url | text | ts |
|---|---|---|
https://aws.amazon.com/blogs/big-data/category/analytics/amazon-emr/ | Amazon EMR | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Amazon EMR AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, […] Amazon EMR Serverless eliminates local storage provisioning, reducing data processing costs by up to 20% by Karthik Prabhakar , Matt Tolton , Neil Mukerje , and Ravi Kumar Singh on 06 JAN 2026 in Amazon EMR , Analytics , Announcements , Intermediate (200) , Serverless Permalink Comments Share In this post, you’ll learn how Amazon EMR Serverless eliminates the need to configure local disk storage for Apache Spark workloads through a new serverless storage capability. We explain how this feature automatically handles shuffle operations, reduces data processing costs by up to 20%, prevents job failures from disk capacity constraints, and enables elastic scaling by decoupling storage from compute. Modernize Apache Spark workflows using Spark Connect on Amazon EMR on Amazon EC2 by Philippe Wanner and Ege Oguzman on 18 DEC 2025 in Advanced (300) , Amazon EC2 , Amazon EMR , Technical How-to Permalink Comments Share In this post, we demonstrate how to implement Apache Spark Connect on Amazon EMR on Amazon Elastic Compute Cloud (Amazon EC2) to build decoupled data processing applications. We show how to set up and configure Spark Connect securely, so you can develop and test Spark applications locally while executing them on remote Amazon EMR clusters. Introducing the Apache Spark troubleshooting agent for Amazon EMR and AWS Glue by Jake Zych , Andrew Kim , Maheedhar Reddy Chappidi , Arunav Gupta , Jeremy Samuel , Muhammad Ali Gulzar , Mohit Saxena , Mukul Prasad , Kartik Panjabi , Shubham Mehta , Vishal Kajjam , Vidyashankar Sivakumar , and Wei Tang on 15 DEC 2025 in Advanced (300) , Amazon EMR , AWS Glue , Kiro , Technical How-to Permalink Comments Share In this post, we show you how the Apache Spark troubleshooting agent helps analyze Apache Spark issues by providing detailed root causes and actionable recommendations. You’ll learn how to streamline your troubleshooting workflow by integrating this agent with your existing monitoring solutions across Amazon EMR and AWS Glue. 
Introducing Apache Spark upgrade agent for Amazon EMR by Keerthi Chadalavada , McCall Peltier , Rajendra Gujja , Bo Li , Malinda Malwala , Mohit Saxena , Mukul Prasad , Vaibhav Naik , Pradeep Patel , Shubham Mehta , and XiaoRun Yu on 15 DEC 2025 in Advanced (300) , Amazon EMR , Kiro , Technical How-to Permalink Comments Share In this post, you learn how to assess your existing Amazon EMR Spark applications, use the Spark upgrade agent directly from the Kiro IDE, upgrade a sample e-commerce order analytics Spark application project (including build configs, source code, tests, and data quality validation), and review code changes before rolling them out through your CI/CD pipeline. Accelerate Apache Hive read and write on Amazon EMR using enhanced S3A by Ramesh Kandasamy , Giovanni Matteo Fumarola , Himanshu Mishra , Paramvir Singh , and Anmol Sundaram on 15 DEC 2025 in Amazon EMR , Analytics , Announcements , Intermediate (200) Permalink Comments Share In this post, we demonstrate how Apache Hive on Amazon EMR 7.10 delivers significant performance improvements for both read and write operations on Amazon S3. Amazon EMR HBase on Amazon S3 transitioning to EMR S3A with comparable EMRFS performance by Dong Li , Giovanni Matteo Fumarola , and Ramesh Kandasamy on 15 DEC 2025 in Amazon EMR , Analytics , Announcements , AWS Big Data Permalink Comments Share Starting with version 7.10, Amazon EMR is transitioning from EMR File System (EMRFS) to EMR S3A as the default file system connector for Amazon S3 access. This transition brings HBase on Amazon S3 to a new level, offering performance parity with EMRFS while delivering substantial improvements, including better standardization, improved portability, stronger community support, improved performance through non-blocking I/O, asynchronous clients, and better credential management with AWS SDK V2 integration. In this post, we discuss this transition and its benefits. How Socure achieved 50% cost reduction by migrating from self-managed Spark to Amazon EMR Serverless by Junaid Effendi, Pengyu Wang and Raj Ramasubbu on 15 DEC 2025 in Advanced (300) , Amazon EMR , Customer Solutions , Serverless Permalink Comments Share Socure is one of the leading providers of digital identity verification and fraud solutions. Socure’s data science environment includes a streaming pipeline called Transaction ETL (TETL), built on OSS Apache Spark running on Amazon EKS. TETL ingests and processes data volumes ranging from small to large datasets while maintaining high-throughput performance. In this post, we show how Socure was able to achieve 50% cost reduction by migrating the TETL streaming pipeline from self-managed spark to Amazon EMR serverless. Run Apache Spark and Iceberg 4.5x faster than open source Spark with Amazon EMR by Atul Payapilly , Akshaya KP , Giovanni Matteo Fumarola , and Hari Kishore Chaparala on 26 NOV 2025 in Advanced (300) , Amazon EMR , Announcements , Technical How-to Permalink Comments Share This post shows how Amazon EMR 7.12 can make your Apache Spark and Iceberg workloads up to 4.5x faster performance. Apache Spark encryption performance improvement with Amazon EMR 7.9 by Sonu Kumar Singh , Roshin Babu , Polaris Jhandi , and Zheng Yuan on 26 NOV 2025 in Advanced (300) , Amazon EMR , Announcements Permalink Comments Share In this post, we analyze the results from our benchmark tests comparing the Amazon EMR 7.9 optimized Spark runtime against Spark 3.5.5 without encryption optimizations. 
We walk through a detailed cost analysis and provide step-by-step instructions to reproduce the benchmark. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/analytics/amazon-athena/page/2/ | Amazon Athena | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Amazon Athena Accelerate your analytics with Amazon S3 Tables and Amazon SageMaker Lakehouse by Sandeep Adwankar , Aditya Kalyanakrishnan , and Srividya Parthasarathy on 17 APR 2025 in Amazon Athena , Amazon Redshift , Amazon SageMaker Lakehouse , Amazon SageMaker Studio , Amazon Simple Storage Service (S3) , Analytics , Announcements , AWS Glue , AWS Identity and Access Management (IAM) , AWS Lake Formation Permalink Comments Share Amazon SageMaker Lakehouse is a unified, open, and secure data lakehouse that now seamlessly integrates with Amazon S3 Tables, the first cloud object store with built-in Apache Iceberg support. In this post, we guide you how to use various analytics services using the integration of SageMaker Lakehouse with S3 Tables. Enhancing Adobe Marketo Engage Data Analysis with AWS Glue Integration by Kenny Rajan , Kamen Sharlandjiev , Basheer Sheriff , and Rafal Pawlaszek on 11 MAR 2025 in Amazon Athena , Analytics , AWS Glue , Partner solutions Permalink Share In this post, we show you how to use AWS Glue to extract data from Marketo Engage for data processing and enrichment on AWS for use in marketing analytics workflows. Introducing a new unified data connection experience with Amazon SageMaker Lakehouse unified data connectivity by Chiho Sugimoto , Shubham Agrawal , Joju Eruppanal , Noritaka Sekiyama , and Julie Zhao on 16 DEC 2024 in Amazon Athena , Amazon SageMaker , Analytics , AWS Glue Permalink Comments Share With Amazon SageMaker Lakehouse unified data connectivity, you can confidently connect, explore, and unlock the full value of your data across AWS services and achieve your business objectives with agility. This post demonstrates how SageMaker Lakehouse unified data connectivity helps your data integration workload by streamlining the establishment and management of connections for various data sources. Building end-to-end data lineage for one-time and complex queries using Amazon Athena, Amazon Redshift, Amazon Neptune and dbt by Nancy Wu , Xu Feng , and Da Xu on 12 DEC 2024 in Amazon Athena , Amazon DataZone , Amazon Neptune , Amazon Redshift , Amazon Simple Storage Service (S3) , Analytics , AWS Glue , AWS Lambda , AWS Step Functions , Technical How-to Permalink Comments Share In this post, we use dbt for data modeling on both Amazon Athena and Amazon Redshift. dbt on Athena supports real-time queries, while dbt on Amazon Redshift handles complex queries, unifying the development language and significantly reducing the technical learning curve. Using a single dbt modeling language not only simplifies the development process but also automatically generates consistent data lineage information. This approach offers robust adaptability, easily accommodating changes in data structures. Catalog and govern Amazon Athena federated queries with Amazon SageMaker Lakehouse by Sandeep Adwankar , Stuti Deshpande , Praveen Kumar , Scott Rigney , and Noritaka Sekiyama on 04 DEC 2024 in Amazon Athena , Amazon SageMaker , Analytics Permalink Comments Share In this post, we show how to connect to, govern, and run federated queries on data stored in Redshift, DynamoDB (Preview), and Snowflake (Preview). 
To query our data, we use Athena, which is seamlessly integrated with SageMaker Unified Studio. We use SageMaker Lakehouse to present data to end-users as federated catalogs, a new type of catalog object. Finally, we demonstrate how to use column-level security permissions in AWS Lake Formation to give analysts access to the data they need while restricting access to sensitive information. How ANZ Institutional Division built a federated data platform to enable their domain teams to build data products to support business outcomes by Leo Ramsamy , Rada Stanic , and Srinivasan Kuppusamy on 04 DEC 2024 in Amazon Athena , Amazon DataZone , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon Quick Sight , Amazon Redshift , Amazon Simple Storage Service (S3) , Analytics , Architecture , AWS Glue , AWS Lake Formation , Best Practices , Customer Solutions , Thought Leadership Permalink Comments Share ANZ Institutional Division has transformed its data management approach by implementing a federated data platform based on data mesh principles. This shift aims to unlock untapped data potential, improve operational efficiency, and increase agility. The new strategy empowers domain teams to create and manage their own data products, treating data as a valuable asset rather than a byproduct. This post explores how the shift to a data product mindset is being implemented, the challenges faced, and the early wins that are shaping the future of data management in the Institutional Division. From data lakes to insights: dbt adapter for Amazon Athena now supported in dbt Cloud by Darshit Thakkar , BP Yau , and Selman Ay on 22 NOV 2024 in Amazon Athena , Analytics , Announcements Permalink Comments Share We are excited to announce that the dbt adapter for Amazon Athena is now officially supported in dbt Cloud. This integration enables data teams to efficiently transform and manage data using Athena with dbt Cloud’s robust features, enhancing the overall data workflow experience. In this post, we discuss the advantages of dbt Cloud over dbt Core, common use cases, and how to get started with Amazon Athena using the dbt adapter. Streamline AI-driven analytics with governance: Integrating Tableau with Amazon DataZone by Ramesh H Singh , Adiascar Cisneros , Ariana Rahgozar , Yogesh Dhimate , and Joel Farvault on 30 OCT 2024 in Amazon Athena , Amazon DataZone , Analytics , Announcements Permalink Comments Share Amazon DataZone recently announced the expansion of data analysis and visualization options for your project-subscribed data within Amazon DataZone using the Amazon Athena JDBC driver. In this post, you learn how the recent enhancements in Amazon DataZone facilitate a seamless connection with Tableau. By integrating Tableau with the comprehensive data governance capabilities of Amazon DataZone, we’re empowering data consumers to quickly and seamlessly explore and analyze their governed data. 
Expanding data analysis and visualization options: Amazon DataZone now integrates with Tableau, Power BI, and more by Ramesh H Singh, Eric Fleishman, Fabricio Hamada, Joel Farvault, Lakshmi Nair, Lionel Pulickal, and Theo Tolv on 30 OCT 2024 in Amazon Athena, Amazon DataZone, Analytics, Intermediate (200), Launch. Amazon DataZone has now launched authentication support through the Amazon Athena JDBC driver, allowing data users to seamlessly query their subscribed data lake assets via popular business intelligence (BI) and analytics tools like Tableau, Power BI, Excel, SQL Workbench, DBeaver, and more. This integration empowers data users to access and analyze governed data within Amazon DataZone using familiar tools, boosting both productivity and flexibility. Analyze Amazon EMR on Amazon EC2 cluster usage with Amazon Athena and Amazon QuickSight by Boon Lee Eu, Kyara Labrador, Vikas Omer, and Lorenzo Ripani on 25 OCT 2024 in Amazon Athena, Amazon EC2, Amazon EMR, Amazon Quick Sight, Technical How-to. In this post, we guide you through deploying a comprehensive solution in your Amazon Web Services (AWS) environment to analyze Amazon EMR on EC2 cluster usage. By using this solution, you will gain a deep understanding of resource consumption and associated costs of individual applications running on your EMR cluster.
| 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/events/reinvent/ | AWS re:Invent | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: AWS re:Invent AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, […] Your guide to AWS Analytics at AWS re:Invent 2025 by Sonu Kumar Singh and Navnit Shukla on 13 NOV 2025 in Analytics , AWS re:Invent , Events Permalink Comments Share It’s that time of year again — AWS re:Invent is here! At re:Invent, bold ideas come to life. Get a front-row seat to hear inspiring stories from AWS experts, customers, and leaders as they explore today’s most impactful topics, from data analytics to AI. For all the data enthusiasts and professionals, we’ve curated a comprehensive […] Top analytics announcements of AWS re:Invent 2024 by Sakti Mishra and Navnit Shukla on 26 FEB 2025 in Analytics , Announcements , AWS re:Invent Permalink Comments Share AWS re:Invent 2024, the flagship annual conference, took place December 2–6, 2024, in Las Vegas, bringing together thousands of cloud enthusiasts, innovators, and industry leaders from around the globe. Analytics remained one of the key focus areas this year, with significant updates and innovations aimed at helping businesses harness their data more efficiently and accelerate insights. In this post, we walk you through the top analytics announcements from re:Invent 2024 and explore how these innovations can help you unlock the full potential of your data. Recap of Amazon Redshift key product announcements in 2024 by Neeraja Rentachintala on 17 DEC 2024 in Amazon Redshift , Analytics , Announcements , AWS re:Invent , Generative AI Permalink Comments Share Amazon Redshift made significant strides in 2024, that enhanced price-performance, enabled data lakehouse architectures by blurring the boundaries between data lakes and data warehouses, simplified ingestion and accelerated near real-time analytics, and incorporated generative AI capabilities to build natural language-based applications and boost user productivity. This blog post provides a comprehensive overview of the major product innovations and enhancements made to Amazon Redshift in 2024. The next generation of Amazon SageMaker: The center for all your data, analytics, and AI by G2 Krishnamoorthy and Rahul Pathak on 04 DEC 2024 in Amazon SageMaker , Analytics , Artificial Intelligence , AWS re:Invent Permalink Comments Share This week on the keynote stages at AWS re:Invent 2024, you heard from Matt Garman, CEO, AWS, and Swami Sivasubramanian, VP of AI and Data, AWS, speak about the next generation of Amazon SageMaker, the center for all of your data, analytics, and AI. 
This update addresses the evolving relationship between analytics and AI workloads, aiming to streamline how customers work with their data. It helps organizations collaborate more effectively, reduce data silos, and accelerate the development of AI-powered applications while maintaining robust governance and security measures. Your guide to AWS Analytics at AWS re:Invent 2024 by Imtiaz Sayed and Navnit Shukla on 14 NOV 2024 in AWS re:Invent Permalink Comments Share It’s AWS re:Invent time, where you turn your ideas into reality. Get a front row seat to hear real stories from AWS customers, experts and leaders about navigating pressing topics like generative AI and data analytics. For data enthusiasts and data professionals alike, this blog is a curated and comprehensive guide to all analytics sessions, for you to efficiently plan your itinerary. AWS re:Invent 2023 Amazon Redshift Sessions Recap by Mia Heard on 18 DEC 2023 in Amazon Redshift , Analytics , AWS re:Invent Permalink Comments Share Amazon Redshift powers data-driven decisions for tens of thousands of customers every day with a fully managed, AI-powered cloud data warehouse, delivering the best price-performance for your analytics workloads. Customers use Amazon Redshift as a key component of their data architecture to drive use cases from typical dashboarding to self-service analytics, real-time analytics, machine learning […] Amazon Redshift announcements at AWS re:Invent 2023 to enable analytics on all your data by Neeraja Rentachintala and Sunaina Abdul Salah on 29 NOV 2023 in Amazon Redshift , Analytics , AWS re:Invent Permalink Comments Share In 2013, Amazon Web Services revolutionized the data warehousing industry by launching Amazon Redshift, the first fully-managed, petabyte-scale, enterprise-grade cloud data warehouse. Amazon Redshift made it simple and cost-effective to efficiently analyze large volumes of data using existing business intelligence tools. This cloud service was a significant leap from the traditional data warehousing solutions, which […] Unlocking the value of data as your differentiator by G2 Krishnamoorthy and Rahul Pathak on 29 NOV 2023 in Analytics , AWS Big Data , AWS re:Invent Permalink Comments Share Today on the AWS re:Invent keynote stage, Swami Sivasubramanian, VP of Data and AI, AWS, spoke about the beneficial relationship among data, generative AI, and humans—all working together to unleash new possibilities in efficiency and creativity. There has never been a more exciting time in modern technology. Innovation is accelerating everywhere, and the future is […] Unlock innovation in data and AI at AWS re:Invent 2023 by Pradeep Parmar on 15 NOV 2023 in Analytics , AWS re:Invent Permalink Comments Share For organizations seeking to unlock innovation with data and AI, AWS re:Invent 2023 offers several opportunities. Attendees will discover services, strategies, and solutions for tackling any data challenge. In this post, we provide a curated list of keynotes, sessions, demos, and exhibits that will showcase how you can unlock innovation in data and AI using […] ← Older posts Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Agentic AI? 
| 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/aws-analytics-at-reinvent-2025-unifying-data-ai-and-governance-at-scale/#Comments | AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, accelerate insights with AI, and maintain robust governance without sacrificing agility. Amazon SageMaker: Your data platform, simplified AWS introduced a faster, simpler approach to data platform onboarding for Amazon SageMaker Unified Studio . The new one-click onboarding experience eliminates weeks of setup, so teams can start working with existing datasets in minutes using their current AWS Identity and Access Management (IAM) roles and permissions. Accessible directly from Amazon SageMaker , Amazon Athena , Amazon Redshift , and Amazon S3 Tables consoles, this streamlined experience automatically creates SageMaker Unified Studio projects with existing data permissions intact. At its core is a powerful new serverless notebook that reimagines how data professionals work. This single interface combines SQL queries, Python code, Apache Spark processing, and natural language prompts, backed by Amazon Athena for Apache Spark to scale from interactive exploration to petabyte-scale jobs. Data engineers, analysts, and data scientists no longer need to context-switch between different tools based on workload—they can explore data with SQL, build models with Python, and use AI assistance, all in one place. The introduction of Amazon SageMaker Data Agent in the new SageMaker notebooks marks a pivotal moment in AI-assisted development for data builders. This built-in agent doesn’t only generate code, it understands your data context, catalog information, and business metadata to create intelligent execution plans from natural language descriptions. When you describe an objective, the agent breaks down complex analytics and machine learning (ML) tasks into manageable steps, generates the required SQL and Python code, and maintains awareness of your notebook environment throughout the entire process. This capability transforms hours of manual coding into minutes of guided development, which means teams can focus on gleaning insights rather than repetitive boilerplate. Embracing open data with Apache Iceberg One significant theme across this year’s launches was the widespread adoption of Apache Iceberg across AWS analytics, transforming how organizations manage petabyte-scale data lakes. Catalog federation to remote Iceberg catalogs through the AWS Glue Data Catalog addresses a critical challenge in modern data architectures. 
You can now query remote Iceberg tables, stored in Amazon Simple Storage Service (Amazon S3) and catalogued in remote Iceberg catalogs, using preferred AWS analytics services such as Amazon Redshift, Amazon EMR, Amazon Athena, AWS Glue, and Amazon SageMaker, without moving or copying tables. Metadata synchronizes in real time, providing query results that reflect the current state. Catalog federation supports both coarse-grained access control and fine-grained access permissions through AWS Lake Formation, enabling cross-account sharing and trusted identity propagation while maintaining consistent security across federated catalogs. Amazon Redshift now writes directly to Apache Iceberg tables, enabling true open lakehouse architectures where analytics seamlessly span data warehouses and lakes. Apache Spark on Amazon EMR 7.12, AWS Glue, Amazon SageMaker notebooks, Amazon S3 Tables, and the AWS Glue Data Catalog now support Iceberg V3’s capabilities, including deletion vectors, which mark deleted rows without expensive file rewrites (dramatically reducing pipeline costs and accelerating data modifications), and row lineage. V3 automatically tracks every record’s history, creating audit trails essential for compliance, and adds table-level encryption that helps organizations meet stringent privacy regulations. These innovations mean faster writes, lower storage costs, comprehensive audit trails, and efficient incremental processing across your data architecture. Governance that scales with your organization Data governance received substantial attention at re:Invent with major enhancements to Amazon SageMaker Catalog. Organizations can now curate data at the column level with custom metadata forms and rich text descriptions, indexed in real time for immediate discoverability. New metadata enforcement rules require data producers to classify assets with approved business vocabulary before publication, providing consistency across the enterprise. The catalog uses Amazon Bedrock large language models (LLMs) to automatically suggest relevant business glossary terms by analyzing table metadata and schema information, bridging the gap between technical schemas and business language. Perhaps most importantly, SageMaker Catalog now exports its entire asset metadata as queryable Apache Iceberg tables through Amazon S3 Tables. This way, teams can analyze catalog inventory with standard SQL to answer questions like “which assets lack business descriptions?” or “how many confidential datasets were registered last month?” without building custom ETL infrastructure. As organizations adopt multi-warehouse architectures to scale and isolate workloads, the new Amazon Redshift federated permissions capability eliminates governance complexity. Define data permissions once from an Amazon Redshift warehouse, and they are automatically enforced across the warehouses in your account. Row-level, column-level, and masking controls apply consistently regardless of which warehouse queries originate from, and new warehouses automatically inherit permission policies. This horizontal scalability means organizations can add warehouses without increasing governance overhead, and analysts immediately see the databases from registered warehouses. Accelerating AI innovation with Amazon OpenSearch Service Amazon OpenSearch Service introduced powerful new capabilities to simplify and accelerate AI application development.
With support for OpenSearch 3.3 , agentic search enables precise results using natural language inputs without the need for complex queries, making it easier to build intelligent AI agents. The new Apache Calcite-powered PPL engine delivers query optimization and an extensive library of commands for more efficient data processing. As seen in Matt Garman’s keynote , building large-scale vector databases is now dramatically faster with GPU acceleration and auto-optimization . Previously, creating large-scale vector indexes required days of building time and weeks of manual tuning by experts, which slowed innovation and prevented cost-performance optimizations. The new serverless auto-optimize jobs automatically evaluate index configurations—including k-nearest neighbors (k-NN) algorithms, quantization, and engine settings—based on your specified search latency and recall requirements. Combined with GPU acceleration, you can build optimized indexes up to ten times faster at 25% of the indexing cost, with serverless GPUs that activate dynamically and bill only when providing speed boosts. These advancements simplify scaling AI applications such as semantic search, recommendation engines, and agentic systems, so teams can innovate faster by dramatically reducing the time and effort needed to build large-scale, optimized vector databases. Performance and cost optimization Also announced in the keynote , Amazon EMR Serverless now eliminates local storage provisioning for Apache Spark workloads, introducing serverless storage that reduces data processing costs by up to 20% while preventing job failures from disk capacity constraints. The fully managed, auto scaling storage encrypts data in transit and at rest with job-level isolation, allowing Spark to release workers immediately when idle rather than keeping them active to preserve temporary data. Additionally, AWS Glue introduced materialized views based on Apache Iceberg, storing precomputed query results that automatically refresh as source data changes. Spark engines across Amazon Athena, Amazon EMR, and AWS Glue intelligently rewrite queries to use these views, accelerating performance by up to eight times while reducing compute costs. The service handles refresh schedules, change detection, incremental updates, and infrastructure management automatically. The new Apache Spark upgrade agent for Amazon EMR transforms version upgrades from months-long projects into week-long initiatives. Using conversational interfaces, engineers express upgrade requirements in natural language while the agent automatically identifies API changes and behavioral modifications across PySpark and Scala applications. Engineers review and approve suggested changes before implementation, maintaining full control while the agent validates functional correctness through data quality checks. Currently supporting upgrades from Spark 2.4 to 3.5, this capability is available through SageMaker Unified Studio, Kiro CLI , or an integrated development environment (IDE) with Model Context Protocol compatibility. For workflow optimization, AWS introduced a new Serverless deployment option for Amazon Managed Workflows for Apache Airflow (Amazon MWAA), which eliminates the operational overhead of managing Apache Airflow environments while optimizing costs through serverless scaling. This new offering addresses key challenges of operational scalability, cost optimization, and access management that data engineers and DevOps teams face when orchestrating workflows. 
With Amazon MWAA Serverless, data engineers can focus on defining their workflow logic rather than monitoring provisioned capacity. They can now submit their Airflow workflows for execution on a schedule or on demand, paying only for the actual compute time used during each task’s execution. Looking forward These launches collectively represent more than incremental improvements. They signal a fundamental shift in how organizations are approaching analytics. By unifying data warehousing, data lakes, and ML under a common framework built on Apache Iceberg, simplifying access through intelligent interfaces powered by AI, and maintaining robust governance that scales effortlessly, AWS is giving organizations the tools to focus on insights rather than infrastructure. The emphasis on automation, from AI-assisted development to self-managing materialized views and serverless storage, reduces operational overhead while improving performance and cost efficiency. As data volumes continue to grow and AI becomes increasingly central to business operations, these capabilities position AWS customers to accelerate their data-driven initiatives with unprecedented simplicity and power. To view the re:Invent 2025 Innovation Talk on analytics, visit Harnessing analytics for humans and AI on YouTube. About the authors Larry Weber Larry leads product marketing for the analytics portfolio at AWS. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/analytics/#aws-page-content-main | Analytics | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Analytics Navigating architectural choices for a lakehouse using Amazon SageMaker by Lakshmi Nair and Saman Irfan on 12 JAN 2026 in Amazon SageMaker Data & AI Governance , Amazon SageMaker Lakehouse , Amazon SageMaker Unified Studio , Analytics Permalink Comments Share Over time, several distinct lakehouse approaches have emerged. In this post, we show you how to evaluate and choose the right lakehouse pattern for your needs. A lakehouse architecture isn’t about choosing between a data lake and a data warehouse. Instead, it’s an approach to interoperability where both frameworks coexist and serve different purposes within a unified data architecture. By understanding fundamental storage patterns, implementing effective catalog strategies, and using native storage capabilities, you can build scalable, high-performance data architectures that support both your current analytics needs and future innovation. Access Databricks Unity Catalog data using catalog federation in the AWS Glue Data Catalog by Srividya Parthasarathy and Venkat Viswanathan on 12 JAN 2026 in Advanced (300) , Amazon SageMaker , AWS Glue , AWS Lake Formation , Technical How-to Permalink Comments Share AWS has launched the catalog federation capability, enabling direct access to Apache Iceberg tables managed in Databricks Unity Catalog through the AWS Glue Data Catalog. With this integration, you can discover and query Unity Catalog data in Iceberg format using an Iceberg REST API endpoint, while maintaining granular access controls through AWS Lake Formation. In this post, we demonstrate how to set up catalog federation between the Glue Data Catalog and Databricks Unity Catalog, enabling data querying using AWS analytics services. Use Amazon SageMaker custom tags for project resource governance and cost tracking by David Victoria , Ahan Malli , and Rohit Srikanta on 08 JAN 2026 in Advanced (300) , Amazon SageMaker , Amazon SageMaker Unified Studio , Technical How-to Permalink Comments Share Amazon SageMaker announced a new feature that you can use to add custom tags to resources created through an Amazon SageMaker Unified Studio project. This helps you enforce tagging standards that conform to your organization’s service control policies (SCPs) and helps enable cost tracking reporting practices on resources created across the organization. In this post, we look at use cases for custom tags and how to use the AWS Command Line Interface (AWS CLI) to add tags to project resources. Create AWS Glue Data Catalog views using cross-account definer roles by Aarthi Srinivasan and Sundeep Kumar on 08 JAN 2026 in Advanced (300) , Analytics , AWS Glue , Technical How-to Permalink Comments Share In this post, we demonstrate how to use cross-account IAM definer roles with AWS Glue Data Catalog views. We show how data owner accounts can create and manage views in a central governance account while maintaining security and control over their data assets. 
AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, […] Amazon EMR Serverless eliminates local storage provisioning, reducing data processing costs by up to 20% by Karthik Prabhakar , Matt Tolton , Neil Mukerje , and Ravi Kumar Singh on 06 JAN 2026 in Amazon EMR , Analytics , Announcements , Intermediate (200) , Serverless Permalink Comments Share In this post, you’ll learn how Amazon EMR Serverless eliminates the need to configure local disk storage for Apache Spark workloads through a new serverless storage capability. We explain how this feature automatically handles shuffle operations, reduces data processing costs by up to 20%, prevents job failures from disk capacity constraints, and enables elastic scaling by decoupling storage from compute. Building scalable AWS Lake Formation governed data lakes with dbt and Amazon Managed Workflows for Apache Airflow by Abhilasha Agarwal and Muralidhar Reddy on 06 JAN 2026 in Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , AWS Lake Formation , Expert (400) , Technical How-to Permalink Comments Share Organizations often struggle with building scalable and maintainable data lakes—especially when handling complex data transformations, enforcing data quality, and monitoring compliance with established governance. Traditional approaches typically involve custom scripts and disparate tools, which can increase operational overhead and complicate access control. A scalable, integrated approach is needed to simplify these processes, improve data reliability, […] Simplify multi-warehouse data governance with Amazon Redshift federated permissions by Satesh Sonti , Ning Di , Sandeep Adwankar , Ramchandra Anil Kulkarni , and Abhishek Rai Sharma on 05 JAN 2026 in Advanced (300) , Amazon Redshift , Technical How-to Permalink Comments Share Amazon Redshift federated permissions simplify permissions management across multiple Redshift warehouses. In this post, we show you how to define data permissions one time and automatically enforce them across warehouses in your AWS account, removing the need to re-create security policies in each warehouse. Simplified management of Amazon MSK with natural language using Kiro CLI and Amazon MSK MCP Server by Kalyan Janaki , Aarjvi Desai , Ankit Mishra , and Sandhya Khanderia on 24 DEC 2025 in Amazon Managed Streaming for Apache Kafka (Amazon MSK) , Kiro , Learning Levels , Technical How-to Permalink Comments Share In this post, we demonstrate how Kiro CLI and the MSK MCP server can streamline your Kafka management. Through practical examples and demonstrations, we show you how to use these tools to perform common administrative tasks efficiently while maintaining robust security and reliability. 
Unifying governance and metadata across Amazon SageMaker Unified Studio and Atlan by Karan Singh Thakur, Satabrata Paul, Divij Bhatia, and Leonardo Gomez on 22 DEC 2025 in Advanced (300), Amazon SageMaker Unified Studio, Technical How-to. In this post, we show you how to unify governance and metadata across Amazon SageMaker Unified Studio and Atlan through a comprehensive bidirectional integration. You’ll learn how to deploy the necessary AWS infrastructure, configure secure connections, and set up automated synchronization to maintain consistent metadata across both platforms. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/analytics/amazon-athena?sc_ichannel=ha&sc_icampaign=acq_awsblogsb&sc_icontent=bigdata-resources#aws-page-content-main | Amazon Athena | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Amazon Athena How Twilio built a multi-engine query platform using Amazon Athena and open-source Presto by Amber Runnels , Aakash Pradeep , and Venkatram Bondugula on 21 OCT 2025 in Amazon Athena , Analytics , Customer Solutions , Intermediate (200) Permalink Comments Share At Twilio, we manage a 20 petabyte-scale Amazon S3 data lake that serves the analytics needs of over 1,500 users, processing 2.5 million queries monthly and scanning an average of 85 PB of data. To meet our growing demands for scalability, emerging technology support, and data mesh architecture adoption, we built Odin, a multi-engine query platform that provides an abstraction layer built on top of Presto Gateway. In this post, we discuss how we designed and built Odin, combining Amazon Athena with open-source Presto to create a flexible, scalable data querying solution. Visualize data lineage using Amazon SageMaker Catalog for Amazon EMR, AWS Glue, and Amazon Redshift by Shubham Purwar , Nitin Kumar , and Prashanthi Chinthala on 13 OCT 2025 in Amazon Athena , Amazon EMR , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , AWS Glue , Expert (400) , Technical How-to Permalink Comments Share Amazon SageMaker offers a comprehensive hub that integrates data, analytics, and AI capabilities, providing a unified experience for users to access and work with their data. Through Amazon SageMaker Unified Studio, a single and unified environment, you can use a wide range of tools and features to support your data and AI development needs, including […] Transform your data to Amazon S3 Tables with Amazon Athena by Pathik Shah and Aritra Gupta on 15 AUG 2025 in Amazon Athena , Amazon S3 Tables , Analytics , Intermediate (200) Permalink Comments Share This post demonstrates how Amazon Athena CREATE TABLE AS SELECT (CTAS) simplifies the data transformation process through a practical example: migrating an existing Parquet dataset into Amazon S3 Tables. Build an analytics pipeline that is resilient to Avro schema changes using Amazon Athena by Mohammad Sabeel and Indira Balakrishnan on 25 JUL 2025 in Amazon Athena , Analytics , AWS Glue , Intermediate (200) Permalink Comments Share This post demonstrates how to build a solution by combining Amazon Simple Storage Service (Amazon S3) for data storage, AWS Glue Data Catalog for schema management, and Amazon Athena for one-time querying. We’ll focus specifically on handling Avro-formatted data in partitioned S3 buckets, where schemas can change frequently while providing consistent query capabilities across all data regardless of schema versions. 
How Stifel built a modern data platform using AWS Glue and an event-driven domain architecture by Amit Maindola and Srinivas Kandi, Hossein Johari, Ahmad Rawashdeh, Lei Meng on 07 JUL 2025 in Advanced (300) , Amazon Athena , Amazon EventBridge , Amazon Simple Storage Service (S3) , Analytics , Architecture , AWS Glue , AWS Lake Formation , Best Practices , Experience-Based Acceleration , Technical How-to , Thought Leadership Permalink Comments Share In this post, we show you how Stifel implemented a modern data platform using AWS services and open data standards, building an event-driven architecture for domain data products while centralizing the metadata to facilitate discovery and sharing of data products. Introducing managed query results for Amazon Athena by Guy Bachar , Darshit Thakkar , and Sayan Chakraborty on 03 JUN 2025 in Amazon Athena , Analytics , Announcements Permalink Comments Share We’re thrilled to introduce managed query results, a new Athena feature that automatically stores, secures, and manages the lifecycle of query result data for you at no additional cost. In this post, we demonstrate how to get started with managed query results and, by removing the undifferentiated effort spent on query result management, how Athena helps you get insights from your data in fewer steps than before. Build a secure serverless streaming pipeline with Amazon MSK Serverless, Amazon EMR Serverless and IAM by Shubham Purwar , Nitin Kumar , and Prashanthi Chinthala on 02 JUN 2025 in Amazon Athena , Amazon EMR , Amazon Managed Streaming for Apache Kafka (Amazon MSK) , Analytics , AWS Big Data Permalink Comments Share The post demonstrates a comprehensive, end-to-end solution for processing data from MSK Serverless using an EMR Serverless Spark Streaming job, secured with IAM authentication. Additionally, it demonstrates how to query the processed data using Amazon Athena, providing a seamless and integrated workflow for data processing and analysis. This solution enables near real-time querying of the latest data processed from MSK Serverless and EMR Serverless using Athena, providing instant insights and analytics. How BMW Group built a serverless terabyte-scale data transformation architecture with dbt and Amazon Athena by Philipp Karg , Cizer Pereira , and Selman Ay on 29 APR 2025 in Amazon Athena , Amazon Quick Sight , Analytics , Customer Solutions Permalink Comments Share At the BMW Group, our Cloud Efficiency Analytics (CLEA) team has developed a FinOps solution to optimize costs across over 10,000 cloud accounts This post explores our journey, from the initial challenges to our current architecture, and details the steps we took to achieve a highly efficient, serverless data transformation setup. Amazon SageMaker Lakehouse now supports attribute-based access control by Sandeep Adwankar and Srividya Parthasarathy on 24 APR 2025 in Amazon Athena , Amazon Redshift , Amazon SageMaker Lakehouse , Analytics , Announcements , AWS Glue , AWS Identity and Access Management (IAM) , AWS Lake Formation , Technical How-to Permalink Comments Share Amazon SageMaker Lakehouse now supports attribute-based access control (ABAC) with AWS Lake Formation, using AWS Identity and Access Management (IAM) principals and session tags to simplify data access, grant creation, and maintenance. In this post, we demonstrate how to get started with SageMaker Lakehouse with ABAC. 
Read and write Apache Iceberg tables using AWS Lake Formation hybrid access mode by Aarthi Srinivasan and Parul Saxena on 21 APR 2025 in Amazon Athena, Amazon EMR, AWS Lake Formation, Intermediate (200). In this post, we demonstrate how to use Lake Formation for read access while continuing to use AWS Identity and Access Management (IAM) policy-based permissions for write workloads that update the schema and upsert (insert and update combined) data records into the Iceberg tables. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/create-aws-glue-data-catalog-views-using-cross-account-definer-roles/ | Create AWS Glue Data Catalog views using cross-account definer roles | AWS Big Data Blog
by Aarthi Srinivasan and Sundeep Kumar on 08 JAN 2026 in Advanced (300), Analytics, AWS Glue, Technical How-to
With AWS Glue Data Catalog views, you can create a SQL view in the Data Catalog that references one or more base tables. These multi-dialect views support various SQL query engines, providing consistent access across multiple Amazon Web Services (AWS) services, including Amazon Athena, Amazon Redshift Spectrum, and Apache Spark in both Amazon EMR and AWS Glue 5.0. You can now create Data Catalog views using a cross-account AWS Identity and Access Management (IAM) definer role. A definer role is an IAM role that is used to create the Data Catalog view and that has SELECT permissions on all columns of the underlying base tables. This definer role is assumed by the AWS Glue and AWS Lake Formation service principals to vend credentials to the base tables’ data whenever the view is queried. The definer role allows the Data Catalog view to be shared with principals or AWS accounts so that you can share a filtered subset of data without sharing the base tables.
Previously, Data Catalog views required a definer role in the same AWS account as the base tables. The introduction of cross-account definer roles enables Data Catalog view creation in enterprise data mesh architectures. In this setup, database and table metadata is centralized in a governance account, and individual data owner accounts maintain control over table creation and management through their IAM roles. Data owner accounts can now create and manage Data Catalog views in the central governance account using their existing continuous integration and continuous delivery (CI/CD) pipeline roles.
In this post, we show you a cross-account scenario involving two AWS accounts: a central governance account containing the tables and hosting the views, and a data owner (producer) account with the IAM role used to create and manage views. We provide implementation details for both the SPARK dialect (using AWS SDK code samples) and the ATHENA dialect (using SQL commands); a rough sketch of the CreateTable call involved appears after the key benefits below. Using this approach, you can implement sophisticated data governance models at enterprise scale while maintaining operational efficiency across your AWS environment.
Key benefits
Key benefits of cross-account definer roles are as follows:
- Enhanced data mesh support – Enterprises with multi-account data lakehouse architectures can now maintain their existing operational model, where data owner accounts manage table creation and updates using their established IAM roles. These same roles can now create and manage Data Catalog views across account boundaries.
- Strengthened security controls – By keeping table and view management within data owner account roles, security posture is enhanced through proper separation of duties, audit trails become more comprehensive and meaningful, and access controls follow the principle of least privilege.
- Elimination of data duplication – Data owner accounts can create views in central accounts that provide access to specific data subsets without duplicating tables, reduce storage costs and management overhead, and maintain a single source of truth while enabling targeted data sharing.
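The SDK samples this post refers to are built around the Glue CreateTable and UpdateTable APIs with a view definition. As a rough, non-authoritative preview, here is a minimal boto3 sketch of what such a call can look like when issued with the producer account's definer role (introduced in the prerequisites below). It is not the post's actual sample: the account IDs, Region, view name, and view SQL are placeholders, and the ViewDefinition fields shown (IsProtected, Definer, SubObjects, Representations) should be verified against the current AWS Glue API reference.

```python
# Hedged sketch only: creates a Data Catalog view in the central governance
# account (catalog ID 111122223333 is a placeholder) using the producer
# account's Data-Analyst role as the cross-account definer role.
import boto3

glue = boto3.client("glue")  # credentials of the producer CI/CD or Data-Analyst role

# Placeholder view text; the real post defines its own view query.
view_sql = """
SELECT t1.transaction_id, t1.transaction_type, t2.transaction_location
FROM bankdata_icebergdb.transaction_table1 t1
JOIN bankdata_icebergdb.transaction_table2 t2
  ON t1.transaction_id = t2.transaction_id
"""

glue.create_table(
    CatalogId="111122223333",          # central governance account (placeholder)
    DatabaseName="bankdata_icebergdb",
    TableInput={
        "Name": "transaction_view",    # placeholder view name
        "StorageDescriptor": {
            # Output schema of the view.
            "Columns": [
                {"Name": "transaction_id", "Type": "string"},
                {"Name": "transaction_type", "Type": "string"},
                {"Name": "transaction_location", "Type": "string"},
            ],
        },
        "ViewDefinition": {
            "IsProtected": True,
            # Cross-account definer role owned by the producer account (placeholder ARN).
            "Definer": "arn:aws:iam::444455556666:role/Data-Analyst",
            # Base tables referenced by the view (placeholder Region and account).
            "SubObjects": [
                "arn:aws:glue:us-east-1:111122223333:table/bankdata_icebergdb/transaction_table1",
                "arn:aws:glue:us-east-1:111122223333:table/bankdata_icebergdb/transaction_table2",
            ],
            "Representations": [
                {
                    "Dialect": "SPARK",
                    "DialectVersion": "3.5",            # placeholder
                    "ViewOriginalText": view_sql,
                    "ViewExpandedText": view_sql,       # engines may require a specific expanded form
                }
            ],
        },
    },
)
```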
Reduce storage costs and management overhead. Maintain a single source of truth while enabling targeted data sharing. Solution overview An example customer has a database with two transaction tables in their central account, where the catalog and permissions are maintained. With the database shared with the data owner (producer) account, we create a Data Catalog view in the central account on these two tables, using the producer's definer role. The view from the central account can be shared with additional consumer accounts and queried. We illustrate creating the SPARK dialect using the create-table CLI, and then add the ATHENA dialect for the same view from the Athena console. We also provide AWS SDK sample code for CreateTable() and UpdateTable() with the view definition, and a sample PySpark script to read and verify the view in AWS Glue. The following diagram shows the table, view, and definer IAM role placements between a central governance account and a data producer account. Prerequisites To implement this solution, you need the following prerequisites: Two AWS accounts with AWS Lake Formation set up. For details, refer to Set up AWS Lake Formation. The Lake Formation setup includes registering your IAM admin role as a Lake Formation administrator. In the Data Catalog settings, shown in the following screenshot, Default permissions for newly created databases and tables is set to use Lake Formation permissions only, and Cross-account version settings is set to Version 4. Create an IAM role named Data-Analyst in the producer account. For the IAM permissions on this role, refer to Data analyst permissions. This role will also be used as the view definer role. Add the permissions from the Prerequisites for creating views to this definer role.
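The Lake Formation settings listed in the prerequisites can also be applied programmatically. The following is a minimal boto3 sketch, not part of the original post; the account ID and admin role ARN are placeholders, and the CROSS_ACCOUNT_VERSION parameter name is an assumption you should confirm against the current Lake Formation API before relying on it.

import boto3

# Run with an administrative principal in the account whose settings you are changing.
lf = boto3.client("lakeformation", region_name="<your-region>")

account_id = "<your-account-id>"                                   # placeholder
admin_role_arn = f"arn:aws:iam::{account_id}:role/LF-Admin-Role"   # placeholder

settings = lf.get_data_lake_settings(CatalogId=account_id)["DataLakeSettings"]

# Register the IAM admin role as a Lake Formation administrator.
settings["DataLakeAdmins"] = [{"DataLakePrincipalIdentifier": admin_role_arn}]

# Use Lake Formation permissions only for newly created databases and tables.
settings["CreateDatabaseDefaultPermissions"] = []
settings["CreateTableDefaultPermissions"] = []

# Assumption: the cross-account version is exposed through the Parameters map.
settings.setdefault("Parameters", {})["CROSS_ACCOUNT_VERSION"] = "4"

lf.put_data_lake_settings(CatalogId=account_id, DataLakeSettings=settings)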
Create database and tables in the central account In this step, you create two tables in the central governance account and populate them with a few rows of data: Sign in to the central account as the admin user. Open the Athena console and set up the Athena query results bucket. Run the following queries to create two sample Iceberg tables representing bank customer transaction data:

/* Check if the database exists; if not, create a new database. */
CREATE DATABASE IF NOT EXISTS bankdata_icebergdb;

/* Create transaction_table1. Replace <bucket-name> with your bucket name. */
CREATE TABLE bankdata_icebergdb.transaction_table1 (
  transaction_id string,
  transaction_type string,
  transaction_amount double)
LOCATION 's3://<bucket-name>/bankdata_icebergdb/transaction-table1'
TBLPROPERTIES ( 'table_type'='iceberg', 'write_compression'='zstd' );

/* Create transaction_table2. */
CREATE TABLE bankdata_icebergdb.transaction_table2 (
  transaction_id string,
  transaction_location string,
  transaction_date date)
LOCATION 's3://<bucket-name>/bankdata_icebergdb/transaction-table2'
TBLPROPERTIES ( 'table_type'='iceberg', 'write_compression'='zstd' );

INSERT INTO bankdata_icebergdb.transaction_table1 (transaction_id, transaction_type, transaction_amount) VALUES
('T001', 'purchase', 50.0), ('T002', 'purchase', 120.0), ('T003', 'refund', 200.5), ('T004', 'purchase', 80.0), ('T005', 'withdrawal', 500.0), ('T006', 'purchase', 300.0), ('T007', 'deposit', 1000.0), ('T008', 'refund', 20.0), ('T009', 'purchase', 150.0), ('T010', 'withdrawal', 75.0);

INSERT INTO bankdata_icebergdb.transaction_table2 (transaction_id, transaction_location, transaction_date) VALUES
('T001', 'Charlotte', DATE '2024-10-01'), ('T002', 'Seattle', DATE '2024-10-02'), ('T003', 'Chicago', DATE '2024-10-03'), ('T004', 'Miami', DATE '2024-10-04'), ('T005', 'New York', DATE '2024-10-05'), ('T006', 'Austin', DATE '2024-10-06'), ('T007', 'Denver', DATE '2024-10-07'), ('T008', 'Boston', DATE '2024-10-08'), ('T009', 'San Jose', DATE '2024-10-09'), ('T010', 'Phoenix', DATE '2024-10-10');

Verify the created tables in the Athena query editor by running a preview. Share the database and tables from the central to the producer account In the central governance account, you share the database and the two tables with the producer account and with the Data-Analyst role in the producer account. Sign in to the Lake Formation console as the Lake Formation admin role. In the navigation pane, choose Data permissions. Choose Grant and provide the following information: For Principals, select External accounts and enter the producer account ID, as shown in the following screenshot. For Named Data Catalog Resources, select the default catalog and the database bankdata_icebergdb, as shown in the following screenshot. Under Database permissions, select Describe. For Grantable permissions, select Describe. Choose Grant. Repeat the preceding steps to grant access to the producer account definer role Data-Analyst on the database bankdata_icebergdb and the two tables transaction_table1 and transaction_table2 as follows: Under Database permissions, grant Create table and Describe permissions. Under Table permissions, grant Select and Describe on all columns. With these steps, the data steward in the central governance account has shared the database and tables with the producer account definer role. Steps for the producer account Follow these steps in the producer account: Sign in to the Lake Formation console in the producer account as the Lake Formation administrator. In the left navigation pane, choose Databases. A blue banner will appear on the console, showing pending invitations from AWS Resource Access Manager (AWS RAM). Open the AWS RAM console and review the AWS RAM shares under Shared with me. You will see the AWS RAM shares in a pending state. Select the pending AWS RAM share from the central account and choose Accept resource share. After the resource share request is accepted, the shared database shows up in the producer account.
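The console-based grants and the AWS RAM invitation acceptance described in the preceding steps can also be scripted. The following boto3 sketch is not part of the original post; the region and account IDs are placeholders, and it assumes you run the first half as a Lake Formation admin in the central account and the second half as an admin in the producer account.

import boto3

region = "<your-region>"
central_account_id = "<your-central-account-id>"
producer_account_id = "<your-producer-account-id>"
definer_role_arn = f"arn:aws:iam::{producer_account_id}:role/Data-Analyst"

# --- Central governance account: grant Lake Formation permissions ---
lf = boto3.client("lakeformation", region_name=region)

# Describe on the database for the producer account, with grant option.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": producer_account_id},
    Resource={"Database": {"CatalogId": central_account_id, "Name": "bankdata_icebergdb"}},
    Permissions=["DESCRIBE"],
    PermissionsWithGrantOption=["DESCRIBE"],
)

# Create table and Describe on the database for the definer role.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": definer_role_arn},
    Resource={"Database": {"CatalogId": central_account_id, "Name": "bankdata_icebergdb"}},
    Permissions=["CREATE_TABLE", "DESCRIBE"],
)

# Select and Describe on all columns of both tables for the definer role.
for table_name in ["transaction_table1", "transaction_table2"]:
    lf.grant_permissions(
        Principal={"DataLakePrincipalIdentifier": definer_role_arn},
        Resource={
            "Table": {
                "CatalogId": central_account_id,
                "DatabaseName": "bankdata_icebergdb",
                "Name": table_name,
            }
        },
        Permissions=["SELECT", "DESCRIBE"],
    )

# --- Producer account: accept the pending AWS RAM resource share ---
ram = boto3.client("ram", region_name=region)
invitations = ram.get_resource_share_invitations()["resourceShareInvitations"]
for invitation in invitations:
    if invitation["status"] == "PENDING" and invitation["senderAccountId"] == central_account_id:
        ram.accept_resource_share_invitation(
            resourceShareInvitationArn=invitation["resourceShareInvitationArn"]
        )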
On the Lake Formation console, select the database. On the Create dropdown list, choose Resource link. Provide the name rl_bank_iceberg and choose Create. Next, grant Describe permission on the resource link to the Data-Analyst role in the producer account: In the left navigation pane, choose Data permissions. Choose the Data-Analyst role. Select the resource link rl_bank_iceberg for the database, as shown in the following screenshot. Grant Describe permission on the resource link. Note: Cross-account Data Catalog views can't be created using a resource link, although a resource link is needed for the SDK use of the SPARK dialect. Next, add the central account Data Catalog as a data source in Athena from the producer account: Open the Athena console. In the left navigation pane, choose Data sources and catalogs. Choose Create data source. Select S3 - AWS Glue Data Catalog. Choose AWS Glue Data Catalog in another account and name the data source centraladmin. Choose Next and then create the data source. After the data source is created, navigate to the query editor and verify that the data source centraladmin appears, as shown in the following screenshot. The definer role can now also access and query the central catalog database. Create SPARK dialect view In this step, you create a view with the SPARK dialect using the AWS Glue CLI command create-table: Sign in to the AWS console in the producer account as the Data-Analyst role. Enter the following command in your CLI environment, such as AWS CloudShell, to create the SPARK dialect view:

aws glue create-table --cli-input-json '{
  "DatabaseName": "rl_bank_iceberg",
  "TableInput": {
    "Name": "mdv_transaction1",
    "StorageDescriptor": {
      "Columns": [
        { "Name": "transaction_id", "Type": "string" },
        { "Name": "transaction_type", "Type": "string" },
        { "Name": "transaction_amount", "Type": "float" },
        { "Name": "transaction_location", "Type": "string" },
        { "Name": "transaction_date", "Type": "date" }
      ],
      "SerdeInfo": {}
    },
    "ViewDefinition": {
      "SubObjects": [
        "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table1",
        "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table2"
      ],
      "IsProtected": true,
      "Representations": [
        {
          "Dialect": "SPARK",
          "DialectVersion": "1.0",
          "ViewOriginalText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100;",
          "ViewExpandedText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100;"
        }
      ]
    }
  }
}'

Open the Lake Formation console and verify that the view was created. Verify the dialect of the view on the SQL definitions tab in the view details.
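To confirm the SPARK dialect programmatically instead of through the console, you can inspect the view with the Glue GetTable API. The following boto3 sketch is not from the original post; it assumes that the GetTable response for a multi-dialect view includes a ViewDefinition with its Representations, so verify the response shape against the current AWS Glue API.

import boto3

glue = boto3.client("glue", region_name="<your-region>")

# The view lives in the central account's catalog, so pass the central account ID
# as CatalogId and use the shared database name rather than the resource link.
response = glue.get_table(
    CatalogId="<your-central-account-id>",
    DatabaseName="bankdata_icebergdb",
    Name="mdv_transaction1",
)

view_definition = response["Table"].get("ViewDefinition", {})
for representation in view_definition.get("Representations", []):
    print(representation["Dialect"], representation.get("DialectVersion"))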
Add ATHENA dialect To add the ATHENA dialect, follow these steps: On the Athena console, select centraladmin as the data source. Enter the following SQL script to create the ATHENA dialect for the same view:

ALTER VIEW mdv_transaction1 FORCE ADD DIALECT AS
SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date
FROM transaction_table1 t1
JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id
WHERE t1.transaction_amount > 100

We can't use the resource link rl_bank_iceberg in the Athena query editor to create or alter a view in the central account. Verify the added dialect by running a preview in Athena. To run the query, you can use either the resource link rl_bank_iceberg from the producer account catalog or the centraladmin catalog. The following screenshot shows querying using the resource link of the database in the producer account catalog. The following screenshot shows querying the view from the producer account using the connected catalog centraladmin as the data source. Verify the dialects on the view by inspecting the table in the Lake Formation console. You can now query the view as the Data-Analyst role in the producer account, using both Athena and Spark. The view also shows up in the central account, with access for the Lake Formation admin. You can also create the view with the ATHENA dialect first and then add the SPARK dialect. The SQL syntax to create the view in the ATHENA dialect is shown in the following example:

CREATE PROTECTED MULTI DIALECT VIEW mdv_transaction1 SECURITY DEFINER AS
SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date
FROM transaction_table1 t1
JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id
WHERE t1.transaction_amount > 100;

The update-table CLI call to add the corresponding SPARK dialect is shown in the following example (it targets the same resource link database, rl_bank_iceberg):

aws glue update-table --cli-input-json '{
  "DatabaseName": "rl_bank_iceberg",
  "ViewUpdateAction": "ADD",
  "Force": true,
  "TableInput": {
    "Name": "mdv_transaction1",
    "StorageDescriptor": {
      "Columns": [
        { "Name": "transaction_id", "Type": "string" },
        { "Name": "transaction_type", "Type": "string" },
        { "Name": "transaction_amount", "Type": "float" },
        { "Name": "transaction_location", "Type": "string" },
        { "Name": "transaction_date", "Type": "date" }
      ],
      "SerdeInfo": {}
    },
    "ViewDefinition": {
      "SubObjects": [
        "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table1",
        "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table2"
      ],
      "IsProtected": true,
      "Representations": [
        {
          "Dialect": "SPARK",
          "DialectVersion": "1.0",
          "ViewOriginalText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100",
          "ViewExpandedText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100"
        }
      ]
    }
  }
}'

The following is a sample Python script to create a SPARK dialect view: glueview-createtable.py.
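If you prefer the AWS SDK over the CLI, the create-table call shown earlier translates directly to boto3. The following sketch is not the linked sample script; it is a minimal illustration with the region and central account ID as placeholders, run as the Data-Analyst definer role in the producer account.

import boto3

region = "<your-region>"
central_account_id = "<your-central-account-id>"
table_arn_prefix = f"arn:aws:glue:{region}:{central_account_id}:table/bankdata_icebergdb"

view_sql = (
    "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, "
    "t2.transaction_location, t2.transaction_date "
    "FROM transaction_table1 t1 JOIN transaction_table2 t2 "
    "ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100"
)

glue = boto3.client("glue", region_name=region)

# Create the SPARK dialect view through the resource link database, mirroring the CLI call.
glue.create_table(
    DatabaseName="rl_bank_iceberg",
    TableInput={
        "Name": "mdv_transaction1",
        "StorageDescriptor": {
            "Columns": [
                {"Name": "transaction_id", "Type": "string"},
                {"Name": "transaction_type", "Type": "string"},
                {"Name": "transaction_amount", "Type": "float"},
                {"Name": "transaction_location", "Type": "string"},
                {"Name": "transaction_date", "Type": "date"},
            ],
            "SerdeInfo": {},
        },
        "ViewDefinition": {
            "SubObjects": [
                f"{table_arn_prefix}/transaction_table1",
                f"{table_arn_prefix}/transaction_table2",
            ],
            "IsProtected": True,
            "Representations": [
                {
                    "Dialect": "SPARK",
                    "DialectVersion": "1.0",
                    "ViewOriginalText": view_sql,
                    "ViewExpandedText": view_sql,
                }
            ],
        },
    },
)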
The following code block is a sample AWS Glue extract, transform, and load (ETL) script that accesses the SPARK dialect of the view from AWS Glue 5.0 in the central account. The AWS Glue job execution role should have the Lake Formation SELECT permission on the AWS Glue view:

from pyspark.context import SparkContext
from pyspark.sql import SparkSession

aws_region = "<your-region>"
aws_account_id = "<your-central-account-id>"
local_catalogname = "spark_catalog"
warehouse_path = "s3://<your-bucket-name>/bankdata_icebergdb/transaction-table1"

spark = SparkSession.builder.appName('query_glue_view') \
    .config('spark.sql.extensions', 'org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions') \
    .config(f'spark.sql.catalog.{local_catalogname}', 'org.apache.iceberg.spark.SparkSessionCatalog') \
    .config(f'spark.sql.catalog.{local_catalogname}.catalog-impl', 'org.apache.iceberg.aws.glue.GlueCatalog') \
    .config(f'spark.sql.catalog.{local_catalogname}.client.region', aws_region) \
    .config(f'spark.sql.catalog.{local_catalogname}.glue.account-id', aws_account_id) \
    .config(f'spark.sql.catalog.{local_catalogname}.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO') \
    .config(f'spark.sql.catalog.{local_catalogname}.warehouse', warehouse_path) \
    .getOrCreate()

spark.sql("SHOW DATABASES").show()
spark.sql(f"SHOW TABLES IN {local_catalogname}.bankdata_icebergdb").show()
spark.sql(f"SELECT * FROM {local_catalogname}.bankdata_icebergdb.mdv_transaction1").show()

In the AWS Glue job details, set the following additional parameters for Lake Formation managed tables and for Iceberg tables, respectively: --enable-lakeformation-fine-grained-access = true and --datalake-formats = iceberg. Cleanup To avoid incurring costs, clean up the resources you used for this post: Revoke the Lake Formation permissions granted to the Data-Analyst role and the producer account. Drop the Athena tables. Delete the Athena query results from your Amazon Simple Storage Service (Amazon S3) bucket. Delete the Data-Analyst role from IAM. Conclusion In this post, we demonstrated how to use cross-account IAM definer roles with AWS Glue Data Catalog views. We showed how data owner accounts can create and manage views in a central governance account while maintaining security and control over their data assets. This feature enables enterprises to implement sophisticated data mesh architectures without compromising on security or requiring data duplication. The ability to use cross-account definer roles with Data Catalog views provides several key advantages: Streamlines view management in multi-account environments. Maintains existing CI/CD workflows and automation. Enhances security through centralized governance. Reduces operational overhead by eliminating the need for data duplication. As organizations continue to build and scale their data lakehouse architectures across multiple AWS accounts, cross-account definer roles for Data Catalog views provide a crucial capability for implementing efficient, secure, and well-governed data sharing patterns. About the authors Aarthi Srinivasan Aarthi is a Senior Big Data Architect at Amazon Web Services (AWS). She works with AWS customers and partners to architect data lake solutions, enhance product features, and establish best practices for data governance. Sundeep Kumar Sundeep is a Sr. Specialist Solutions Architect at Amazon Web Services (AWS), helping customers build data lake and analytics platforms and solutions. When not building and designing data lakes, Sundeep enjoys listening to music and playing guitar.
| 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/analytics/amazon-elasticsearch-service/ | Amazon OpenSearch Service | AWS Big Data Blog Skip to Main Content English Contact us Support My account Filter: All Sign in to console Create Account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Amazon OpenSearch Service Trusted identity propagation using IAM Identity Center for Amazon OpenSearch Service by Muthu Pitchaimani and Sohaib Katariwala on 25 JUL 2025 in Amazon OpenSearch Service , Analytics , AWS Identity and Access Management (IAM) Permalink Comments Share Now, by using trusted identity propagation, IAM Identity Center provides a new, direct method for accessing data in OpenSearch Service. In this post, we outline how you can take advantage of this new access method to simplify data access using the OpenSearch UI and still maintain robust role-based access control for your OpenSearch data. Amazon OpenSearch Service 101: How many shards do I need by Tom Burns and Ron Miller on 24 JUL 2025 in Amazon OpenSearch Service , Foundational (100) , Technical How-to Permalink Comments Share Customers new to Amazon OpenSearch Service often ask how many shards their indexes need. An index is a collection of shards, and an index’s shard count can affect both indexing and search request efficiency. OpenSearch Service can take in large amounts of data, split it into smaller units called shards, and distribute those shards across a dynamically changing set of instances. In this post, we provide some practical guidance for determining the ideal shard count for your use case. Workload management in OpenSearch-based multi-tenant centralized logging platforms by Ezat Karimi and Jon Handler on 22 JUL 2025 in Amazon OpenSearch Service , Analytics Permalink Comments Share When you use Amazon OpenSearch Service to store and analyze log data, whether as a developer or an IT admin, you must balance these tenants to make sure you deliver the resources to each tenant so they can ingest, store, and query their data. In this post, we present a multi-layered workload management framework with a rules-based proxy and OpenSearch workload management that can effectively address these challenges. Optimizing vector search using Amazon S3 Vectors and Amazon OpenSearch Service by Sohaib Katariwala , Bobby Mohammed , Sorabh Hamirwasia , Mark Twomey , and Pallavi Priyadarshini on 21 JUL 2025 in Advanced (300) , Amazon OpenSearch Service , Amazon Simple Storage Service (S3) , Analytics , Artificial Intelligence , Launch , Storage , Technical How-to Permalink Comments Share We now have a public preview of two integrations between Amazon Simple Storage Service (Amazon S3) Vectors and Amazon OpenSearch Service that give you more flexibility in how you store and search vector embeddings. In this post, we walk through this seamless integration, providing you with flexible options for vector search implementation. Integrating Amazon OpenSearch Ingestion with Amazon RDS and Amazon Aurora by Michael Torio , Arjun Nambiar , and Sohaib Katariwala on 17 JUL 2025 in Amazon Aurora , Amazon OpenSearch Service , Amazon RDS , Analytics , Intermediate (200) Permalink Comments Share We are happy to announce the general availability of the integration of Amazon OpenSearch Service with Amazon Relational Database Service (Amazon RDS) and Amazon Aurora. 
This new integration eliminates complex data pipelines and enables near real-time data synchronization between Amazon Aurora (including Amazon Aurora MySQL-Compatible Edition and Amazon Aurora PostgreSQL-Compatible Edition) and Amazon RDS databases (including Amazon RDS for MySQL and Amazon RDS for PostgreSQL), and Amazon OpenSearch Service, unlocking advanced search capabilities such as hybrid search, ranked results, and faceted search on transactional databases. Build conversational AI search with Amazon OpenSearch Service by Bharav Patel on 03 JUL 2025 in Amazon OpenSearch Service , Analytics , Generative AI , Intermediate (200) Permalink Comments Share Amazon OpenSearch Service is a versatile search and analytics tool. In this post, we explore conversational search, its architecture, and various ways to implement it. Enhance stability with dedicated cluster manager nodes using Amazon OpenSearch Service by Chinmayi Narasimhadevara and Imtiaz Sayed on 03 JUL 2025 in Amazon OpenSearch Service , Analytics , Best Practices Permalink Comments Share In this post, we show how to enhance the stability of your OpenSearch Service domain with dedicated cluster manager nodes and how using these in deployment enhances your cluster’s stability and reliability. Kaltura reduces observability operational costs by 60% with Amazon OpenSearch Service by Ido Ziv , Roi Gamliel , and Yonatan Dolan on 03 JUL 2025 in Amazon OpenSearch Service , Analytics , Customer Solutions , Technical How-to Permalink Comments Share In this post, we share how Kaltura transformed its observability strategy and technological stack by migrating from a software as a service (SaaS) logging solution to Amazon OpenSearch Service—achieving higher log retention, a 60% reduction in cost, and a centralized platform that empowers multiple teams with real-time insights. Amazon OpenSearch Service 101: Create your first search application with OpenSearch by Sriharsha Subramanya Begolli and Fraser Sequeira on 25 JUN 2025 in Amazon API Gateway , Amazon OpenSearch Service , Architecture , Intermediate (200) Permalink Comments Share In this post, we walk you through a search application building process using Amazon OpenSearch Service. Whether you’re a developer new to search or looking to understand OpenSearch fundamentals, this hands-on post shows you how to build a search application from scratch—starting with the initial setup; diving into core components such as indexing, querying, result presentation; and culminating in the execution of your first search query. Implement secure hybrid and multicloud log ingestion with Amazon OpenSearch Ingestion by Xiaoxue Xu and Simran Singh on 25 JUN 2025 in Amazon OpenSearch Service , AWS Identity and Access Management (IAM) , Intermediate (200) , Monitoring and observability , Multicloud , Security , Technical How-to Permalink Comments Share In this post, we demonstrate how to configure Fluent Bit, a fast and flexible log processor and router supported by various operating systems, to securely send logs from any environment to OpenSearch Ingestion using IAM Roles Anywhere. ← Older posts Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Generative AI? 
| 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/application-services/aws-step-functions/ | AWS Step Functions | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: AWS Step Functions Automate and orchestrate Amazon EMR jobs using AWS Step Functions and Amazon EventBridge by Senthil Kamala Rathinam and Shashidhar Makkapati on 15 SEP 2025 in Advanced (300) , Amazon CloudWatch , Amazon EC2 , Amazon EMR , Amazon EventBridge , Analytics , AWS Step Functions , Technical How-to Permalink Comments Share In this post, we discuss how to build a fully automated, scheduled Spark processing pipeline using Amazon EMR on EC2, orchestrated with Step Functions and triggered by EventBridge. We walk through how to deploy this solution using AWS CloudFormation, processes COVID-19 public dataset data in Amazon Simple Storage Service (Amazon S3), and store the aggregated results in Amazon S3. How Open Universities Australia modernized their data platform and significantly reduced their ETL costs with AWS Cloud Development Kit and AWS Step Functions by Michael Davies and Emma Arrigo on 30 JAN 2025 in Amazon AppFlow , Amazon EventBridge , Amazon Redshift , Amazon Redshift , Amazon Simple Storage Service (S3) , Asia Pacific , AWS Glue , AWS Lambda , AWS Serverless Application Model , AWS Step Functions , Customer Solutions , Education , Higher education Permalink Comments Share At Open Universities Australia (OUA), we empower students to explore a vast array of degrees from renowned Australian universities, all delivered through online learning. In this post, we show you how we used AWS services to replace our existing third-party ETL tool, improving the team’s productivity and producing a significant reduction in our ETL operational costs. Building end-to-end data lineage for one-time and complex queries using Amazon Athena, Amazon Redshift, Amazon Neptune and dbt by Nancy Wu , Xu Feng , and Da Xu on 12 DEC 2024 in Amazon Athena , Amazon DataZone , Amazon Neptune , Amazon Redshift , Amazon Simple Storage Service (S3) , Analytics , AWS Glue , AWS Lambda , AWS Step Functions , Technical How-to Permalink Comments Share In this post, we use dbt for data modeling on both Amazon Athena and Amazon Redshift. dbt on Athena supports real-time queries, while dbt on Amazon Redshift handles complex queries, unifying the development language and significantly reducing the technical learning curve. Using a single dbt modeling language not only simplifies the development process but also automatically generates consistent data lineage information. This approach offers robust adaptability, easily accommodating changes in data structures. Accelerate your data workflows with Amazon Redshift Data API persistent sessions by Dipal Mahajan , Anusha Challa , Blessing Bamiduro , Debu Panda , and Ricardo Serafim on 22 NOV 2024 in Amazon Redshift , Analytics , Announcements , AWS Step Functions , Technical How-to Permalink Comments Share In this post, we’ll walk through an example ETL process that uses session reuse to efficiently create, populate, and query temporary staging tables across the full data transformation workflow—all within the same persistent Amazon Redshift database session. 
You’ll learn best practices for optimizing ETL orchestration code, reducing job runtimes by eliminating connection overhead, and simplifying pipeline complexity Modernize your legacy databases with AWS data lakes, Part 2: Build a data lake using AWS DMS data on Apache Iceberg by Shaheer Mansoor , Anoop Kumar K M , and Sreenivas Nettem on 30 OCT 2024 in Amazon Simple Queue Service (SQS) , Amazon Simple Storage Service (S3) , AWS Big Data , AWS Database Migration Service , AWS Glue , AWS Step Functions , Python , Technical How-to Permalink Comments Share This is part two of a three-part series where we show how to build a data lake on AWS using a modern data architecture. This post shows how to load data from a legacy database (SQL Server) into a transactional data lake (Apache Iceberg) using AWS Glue. We show how to build data pipelines using AWS Glue jobs, optimize them for both cost and performance, and implement schema evolution to automate manual tasks. To review the first part of the series, where we load SQL Server data into Amazon Simple Storage Service (Amazon S3) using AWS Database Migration Service (AWS DMS), see Modernize your legacy databases with AWS data lakes, Part 1: Migrate SQL Server using AWS DMS. Enrich your serverless data lake with Amazon Bedrock by Dave Horne and Robert Kessler on 26 SEP 2024 in Amazon Bedrock , Application Integration , AWS Lambda , AWS Step Functions , Technical How-to Permalink Comments Share Organizations are collecting and storing vast amounts of structured and unstructured data like reports, whitepapers, and research documents. By consolidating this information, analysts can discover and integrate data from across the organization, creating valuable data products based on a unified dataset. This post shows how to integrate Amazon Bedrock with the AWS Serverless Data Analytics Pipeline architecture using Amazon EventBridge, AWS Step Functions, and AWS Lambda to automate a wide range of data enrichment tasks in a cost-effective and scalable manner. Build a serverless data quality pipeline using Deequ on AWS Lambda by Vivek Mittal , John Cherian , and Uma Ramadoss on 14 AUG 2024 in Advanced (300) , AWS Lambda , AWS Step Functions , Technical How-to Permalink Comments Share Poor data quality can lead to a variety of problems, including pipeline failures, incorrect reporting, and poor business decisions. For example, if data ingested from one of the systems contains a high number of duplicates, it can result in skewed data in the reporting system. To prevent such issues, data quality checks are integrated into […] Migrate workloads from AWS Data Pipeline by Noritaka Sekiyama , Matt Su , Vaibhav Porwal , and Sriram Ramarathnam on 25 JUL 2024 in Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Analytics , AWS Data Pipeline , AWS Glue , AWS Step Functions Permalink Comments Share After careful consideration, we have made the decision to close new customer access to AWS Data Pipeline, effective July 25, 2024. AWS Data Pipeline existing customers can continue to use the service as normal. 
AWS continues to invest in security, availability, and performance improvements for AWS Data Pipeline, but we do not plan to introduce […] Automate data loading from your database into Amazon Redshift using AWS Database Migration Service (DMS), AWS Step Functions, and the Redshift Data API by Ritesh Sinha , Praveen Kadipikonda , and Jagadish Kumar on 02 JUL 2024 in Amazon Database Migration Accelerator , Amazon EventBridge , Amazon Redshift , Analytics , AWS Big Data , AWS Step Functions Permalink Comments Share Amazon Redshift is a fast, scalable, secure, and fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing ETL (extract, transform, and load), business intelligence (BI), and reporting tools. Tens of thousands of customers use Amazon Redshift to process exabytes of data per […] Disaster recovery strategies for Amazon MWAA – Part 2 by Chandan Rupakheti and Parnab Basak on 17 JUN 2024 in Amazon EventBridge , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon Simple Storage Service (S3) , AWS Lambda , AWS Step Functions , Technical How-to Permalink Comments Share Amazon Managed Workflows for Apache Airflow (Amazon MWAA) is a fully managed orchestration service that makes it straightforward to run data processing workflows at scale. Amazon MWAA takes care of operating and scaling Apache Airflow so you can focus on developing workflows. However, although Amazon MWAA provides high availability within an AWS Region through features […] ← Older posts
.rgft_d835af5c.rgft_852a8b78{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_d835af5c.rgft_852a8b78{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d835af5c.rgft_852a8b78{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_286fbc8d{letter-spacing:1.6px;text-transform:uppercase;color:var(--rg-color-text-eyebrow, #161D26)}[data-eb-6a8f3296] .rgft_286fbc8d.rgft_cf5cdf86{font-size:calc(1rem * var(--font-size-multiplier, 1.6));line-height:1.5;font-weight:400;font-family:Amazon Ember Mono,Consolas,Andale Mono WT,Andale Mono,Lucida Console,Lucida Sans Typewriter,DejaVu Sans Mono,Bitstream Vera Sans Mono,Liberation Mono,Nimbus Mono L,Monaco,Courier New,Courier,monospace}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_286fbc8d.rgft_cf5cdf86{font-size:calc(.875rem * var(--font-size-multiplier, 1.6));line-height:1.714;font-weight:400}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_286fbc8d.rgft_cf5cdf86{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:2;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_286fbc8d.rgft_cf5cdf86{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_286fbc8d.rgft_cf5cdf86{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_286fbc8d.rgft_cf5cdf86{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_286fbc8d.rgft_cf5cdf86{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_286fbc8d.rgft_cf5cdf86{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_286fbc8d.rgft_c6f92487{font-size:calc(.875rem * var(--font-size-multiplier, 1.6));line-height:1.714;font-weight:400;font-family:Amazon Ember Mono,Consolas,Andale Mono WT,Andale Mono,Lucida Console,Lucida Sans Typewriter,DejaVu Sans Mono,Bitstream Vera Sans Mono,Liberation Mono,Nimbus Mono L,Monaco,Courier New,Courier,monospace}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_286fbc8d.rgft_c6f92487{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:2;font-weight:400}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_286fbc8d.rgft_c6f92487{font-size:calc(.625rem * var(--font-size-multiplier, 1.6));line-height:2.4;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_286fbc8d.rgft_c6f92487{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_286fbc8d.rgft_c6f92487{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_286fbc8d.rgft_c6f92487{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_286fbc8d.rgft_c6f92487{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_286fbc8d.rgft_c6f92487{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_d27b4751{color:var(--rg-color-text-utility, 
#161D26)}[data-eb-6a8f3296] .rgft_d27b4751.rgft_927d7fd1{font-size:calc(1rem * var(--font-size-multiplier, 1.6));line-height:1.5;font-weight:400;font-family:Amazon Ember Mono,Consolas,Andale Mono WT,Andale Mono,Lucida Console,Lucida Sans Typewriter,DejaVu Sans Mono,Bitstream Vera Sans Mono,Liberation Mono,Nimbus Mono L,Monaco,Courier New,Courier,monospace}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d27b4751.rgft_927d7fd1{font-size:calc(1rem * var(--font-size-multiplier, 1.6));line-height:1.5;font-weight:400}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_d27b4751.rgft_927d7fd1{font-size:calc(1rem * var(--font-size-multiplier, 1.6));line-height:1.5;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_d27b4751.rgft_927d7fd1{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_d27b4751.rgft_927d7fd1{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_d27b4751.rgft_927d7fd1{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_d27b4751.rgft_927d7fd1{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d27b4751.rgft_927d7fd1{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_d27b4751.rgft_100c8a76{font-size:calc(.875rem * var(--font-size-multiplier, 1.6));line-height:1.429;font-weight:400;font-family:Amazon Ember Mono,Consolas,Andale Mono WT,Andale Mono,Lucida Console,Lucida Sans Typewriter,DejaVu Sans Mono,Bitstream Vera Sans Mono,Liberation Mono,Nimbus Mono L,Monaco,Courier New,Courier,monospace}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d27b4751.rgft_100c8a76{font-size:calc(.875rem * var(--font-size-multiplier, 1.6));line-height:1.429;font-weight:400}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_d27b4751.rgft_100c8a76{font-size:calc(.875rem * var(--font-size-multiplier, 1.6));line-height:1.429;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_d27b4751.rgft_100c8a76{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_d27b4751.rgft_100c8a76{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_d27b4751.rgft_100c8a76{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_d27b4751.rgft_100c8a76{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d27b4751.rgft_100c8a76{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_d27b4751.rgft_453dc601{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:1.333;font-weight:400;font-family:Amazon Ember Mono,Consolas,Andale Mono WT,Andale Mono,Lucida Console,Lucida Sans Typewriter,DejaVu Sans Mono,Bitstream Vera Sans Mono,Liberation Mono,Nimbus Mono L,Monaco,Courier New,Courier,monospace}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d27b4751.rgft_453dc601{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:1.333;font-weight:400}}@media (max-width: 
480px){[data-eb-6a8f3296] .rgft_d27b4751.rgft_453dc601{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:1.333;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_d27b4751.rgft_453dc601{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_d27b4751.rgft_453dc601{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_d27b4751.rgft_453dc601{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_d27b4751.rgft_453dc601{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d27b4751.rgft_453dc601{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_d27b4751.rgft_949ed5ce{font-size:calc(.625rem * var(--font-size-multiplier, 1.6));line-height:1.2;font-weight:400;font-family:Amazon Ember Mono,Consolas,Andale Mono WT,Andale Mono,Lucida Console,Lucida Sans Typewriter,DejaVu Sans Mono,Bitstream Vera Sans Mono,Liberation Mono,Nimbus Mono L,Monaco,Courier New,Courier,monospace}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f32 | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/use-amazon-sagemaker-custom-tags-for-project-resource-governance-and-cost-tracking/ | Use Amazon SageMaker custom tags for project resource governance and cost tracking | AWS Big Data Blog Use Amazon SageMaker custom tags for project resource governance and cost tracking by David Victoria , Ahan Malli , and Rohit Srikanta on 08 JAN 2026 in Advanced (300) , Amazon SageMaker , Amazon SageMaker Unified Studio , Technical How-to Permalink Comments Share Amazon SageMaker announced a new feature that you can use to add custom tags to resources created through an Amazon SageMaker Unified Studio project. This helps you enforce tagging standards that conform to your organization’s service control policies (SCPs) and helps enable cost tracking and reporting practices on resources created across the organization. As a SageMaker administrator, you can configure a project profile with tag configurations that will be pushed down to projects that currently use or will use that project profile. The project profile is set up to pass either required key and value tag pairings or pass the key of the tag with a default value that can be modified during project creation. All tags passed to the project will result in the resources created by that project being tagged. This provides you with a governance mechanism that ensures project resources carry the expected tags across all projects in the domain. The first release of custom tags for project resources is supported through an application programming interface (API) via the Amazon DataZone SDKs. In this post, we look at use cases for custom tags and how to use the AWS Command Line Interface (AWS CLI) to add tags to project resources. What we hear from customers As customers continue to build and collaborate using AWS tools for model development, generative AI, data processing, and SQL analytics, they see the need to bring control and visibility to the resources being created. To support connectivity to these AWS tools from SageMaker Unified Studio projects, many different types of resources across AWS services need to be created. These resources are created through AWS CloudFormation stacks (through project environment deployment) by the Amazon SageMaker service. From customers we hear the following use cases: Customers need to enforce that tagging practices conform to company policies through the use of AWS controls, such as SCPs, for resource creation. These controls block the creation of resources unless specific tags are placed on the resource. Customers can also start with policies that enforce the correct tags are applied when resources are created, with the additional goal of standardizing resource reporting. By placing identifiable information on resources at creation time, they gain consistency and completeness for cost attribution reporting and observability. Swiss Life, an AWS customer, uses SageMaker as a single solution for cataloging, discovery, sharing, and governance of their enterprise data across business domains. They require that all resources have a set of mandatory tags for their finance group to bill organizations across their company for the AWS resources created. “The launch of project resource tags for Amazon SageMaker allows us to bring visibility to the costs incurred across our accounts.
With this capability we are able to meet the resource tagging guidelines of our company and have confidence in attributing costs across our multi-account setup for the resources created by Amazon SageMaker projects.” – Tim Kopacz, Software Developer at Swiss Life Prerequisites To get started with custom tags, you must have the following resources: A SageMaker Unified Studio domain. An AWS Identity and Access Management (IAM) entity with privileges to make AWS CLI calls to the domain. An IAM entity authorized to make changes to the domain IAM provisioning role. If SageMaker created this for you, it will be called AmazonSageMakerProvisioning-<accountId> . The provisioning role provisions and manages resources defined in the selected blueprints in your account. How to set up project resource tags The following steps outline how you can configure custom tags for your SageMaker Unified Studio project resources: (Optional) Update the SageMaker provisioning role to permit specific tag keys. Create a new project profile with project resource tags configured. Create a new project with project resource tags. Update an existing project with project resource tags. Validate that the resources are tagged. (Optional) Update a SageMaker provisioning role to permit tag key values The AmazonSageMakerProvisioning-<accountId> role has an AWS managed policy with condition aws:TagKeys allowing tags to be created by this role only if the tag key begins with AmazonDataZone . For this example, we will change the tag key to begin with different strings. Skip to Create a new project profile with project resource tags configured if you don’t need tag keys to have a different structure (such as begins with, contains, and so on) Open the AWS Management Console and go to IAM . In the navigation pane, choose Roles . In the list, choose AmazonSageMakerProvisioning- <accountId> . Choose the Permissions tab. Choose Add permissions , and then choose Create inline policy . Under Policy editor , select JSON . Enter the following policy. Add the strings under the condition aws:TagKeys . In this example, tag keys beginning with ACME or tag keys with the exact match of CostCenter will be created by the role. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "CustomTagsUnTagPermissions", "Effect": "Allow", "Action": [ "codecommit:UntagResource", "iam:UntagRole", "logs:UntagResource", "athena:UntagResource", "redshift-serverless:UntagResource", "scheduler:UntagResource", "bedrock:UntagResource", "neptune-graph:UntagResource", "quicksight:UntagResource", "glue:UntagResource", "airflow:UntagResource", "secretsmanager:UntagResource", "lambda:UntagResource", "emr-serverless:UntagResource", "elasticmapreduce:RemoveTags", "sagemaker:DeleteTags", "ec2:DeleteTags" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceAccount": "${aws:PrincipalAccount}" }, "ForAllValues:StringLike": { "aws:TagKeys": [ "AmazonDataZone*", "ACME*", "CostCenter" ] }, "Null": { "aws:ResourceTag/AmazonDataZoneProject": "false" } } }, { "Sid": "CustomTagsTaggingPermissions", "Effect": "Allow", "Action": [ "cloudformation:TagResource", "codecommit:TagResource", "iam:TagRole", "glue:TagResource", "athena:TagResource", "lambda:TagResource", "redshift-serverless:TagResource", "logs:TagResource", "secretsmanager:TagResource", "sagemaker:AddTags", "emr-serverless:TagResource", "neptune-graph:TagResource", "bedrock:TagResource", "elasticmapreduce:AddTags", "airflow:TagResource", "scheduler:TagResource", "quicksight:TagResource", "emr-containers:TagResource", "logs:CreateLogGroup", "athena:CreateWorkGroup", "scheduler:CreateScheduleGroup", "cloudformation:CreateStack", "ec2:*" ], "Resource": "*", "Condition": { "ForAnyValue:StringLike": { "aws:TagKeys": [ "AmazonDataZone*", "ACME*", "CostCenter" ] }, "StringEquals": { "aws:ResourceAccount": "${aws:PrincipalAccount}" } } } ] } It’s possible to scope down the specific AWS service tag and un-tag permissions based on which blueprints or capabilities are being used. Create a new project profile with project resource tags configured Use the following steps to create a new SQL Analytics project profile with custom tags. The example uses AWS CLI commands. Open the AWS CloudShell console. Create a project profile using the following CLI command. The project-resource-tags parameter consists of key (tag key), value (tag value), and isValueEditable (boolean indicating if the tag value can be modified during project creation or update). The allow-custom-project-resource-tags parameter set to true permits the project creator to create additional key-value pairs. The key needs to conform to the inline policy of the AmazonSageMakerProvisioning-<accountId> role. The project-resource-tags-description parameter is a description field for project resource tags. The max character limit is 2,048. The description needs to be passed in every time create-project-profile or update-project-profile is called. 
aws datazone create-project-profile \ --name "SQL Analytics with Project Resource Tags" \ --description "Analyze your data in SageMaker Lakehouse using SQL" \ --domain-identifier "$DOMAIN_ID" \ --region "$REGION" \ --status ENABLED \ --project-resource-tags '[ { "key": "ACME-Application", "value": "SageMaker", "isValueEditable": false }, { "key": "CostCenter", "value": "123", "isValueEditable": true } ]' \ --allow-custom-project-resource-tags \ --environment-configurations '[ { "name": "Tooling", "description": "Configuration for the Tooling Environment", "environmentBlueprintId": "", "deploymentMode": "ON_CREATE", "deploymentOrder": 0, "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "enableSpaces", "value": "false", "isEditable": false }, { "name": "maxEbsVolumeSize", "isEditable": false }, { "name": "idleTimeoutInMinutes", "isEditable": false }, { "name": "lifecycleManagement", "isEditable": false }, { "name": "enableNetworkIsolation", "isEditable": false } ] } }, { "name": "Lakehouse Database", "description": "Creates databases in Amazon SageMaker Lakehouse for storing tables in S3 and Amazon Athena resources for your SQL workloads", "environmentBlueprintId": "", "deploymentMode": "ON_CREATE", "deploymentOrder": 1, "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "glueDbName", "value": "glue_db", "isEditable": true } ] } }, { "name": "OnDemand RedshiftServerless", "description": "Enables you to create an additional Amazon Redshift Serverless workgroup for your SQL workloads", "environmentBlueprintId": "", "deploymentMode": "ON_DEMAND", "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "redshiftDbName", "value": "dev", "isEditable": true }, { "name": "redshiftMaxCapacity", "value": "512", "isEditable": true }, { "name": "redshiftWorkgroupName", "value": "redshift-serverless-workgroup", "isEditable": true }, { "name": "redshiftBaseCapacity", "value": "128", "isEditable": true }, { "name": "connectionName", "value": "redshift.serverless", "isEditable": true }, { "name": "connectToRMSCatalog", "value": "false", "isEditable": false } ] } }, { "name": "OnDemand Catalog for Redshift Managed Storage", "description": "Enables you to create additional catalogs in Amazon SageMaker Lakehouse for storing data in Redshift Managed Storage", "environmentBlueprintId": "", "deploymentMode": "ON_DEMAND", "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "catalogName", "isEditable": true }, { "name": "catalogDescription", "value": "RMS catalog", "isEditable": true } ] } } ]' This project profile will have the tag ACME-Application = SageMaker placed on all projects associated to the project profile and cannot be modified by the project creator. The tag CostCenter = 123 can have the value modified by the project creator because the isValueEditable property is set to true . Grant permissions for users to use the project profile during project creation. In the Authorization section of the project profile set either Selected users or groups or Allow all users and groups . The use of the allow-custom-project-resource-tags parameter means the project creator can add their own tags (key-value pair). 
The key must conform to the condition check in the policy of the provisioning role ( AmazonSageMakerProvisioning-<accountId> ). If the allow-custom-project-resource-tags parameter is changed to false after a project has created custom tags, those tags will be removed during the next project update. Updates to the project profile Updates to project resource tags are possible through the update-project-profile command. The command replaces all values in the project-resource-tags section, so be sure to include the full set of tags. Updates to the project profile are reflected in projects after running the update-project command or when a new project is created using the project profile. The following example adds a new tag, ACME-BusinessUnit = Retail . There are three ways to work with the project-resource-tags parameter when updating the project profile. Passing a non-empty list of project resource tags will replace the tags currently configured on the project profile. Passing an empty list of project resource tags will clear out all previously configured tags: --project-resource-tags '[]' Not including the project resource tag parameter will keep previously configured tags as-is. aws datazone update-project-profile \ --domain-identifier "$DOMAIN_ID" \ --identifier "$PROJECT_PROFILE_ID" \ --region "$REGION" \ --project-resource-tags '[ { "key": "ACME-Application", "value": "SageMaker", "isValueEditable": false }, { "key": "CostCenter", "value": "123", "isValueEditable": true }, { "key": "ACME-BusinessUnit", "value": "Retail", "isValueEditable": false } ]' Create a new project with project resource tags The following steps walk you through creating a new project that inherits tags from the project profile and lets the project creator modify one of the tag values. Create a project using the following example CLI command. Modify the CostCenter tag value using the --resource-tags parameter. Tags configured on the project profile where the isValueEditable attribute is false will be pushed to the project automatically. aws datazone create-project \ --domain-identifier "$DOMAIN_ID" \ --region "$REGION" \ --name "$PROJECT_NAME" \ --description "New project with tags" \ --project-profile-id "$PROJECT_PROFILE_ID" \ --resource-tags '{ "CostCenter": "456" }' Update existing project with project resource tags For existing projects associated with the project profile, you must update the project for the new tags to be applied. Update the project using the following example CLI command. In this scenario, an editable value needs to be updated and a new tag added. The CostCenter tag will have its default value overwritten with "789" and the new ACME-Department = Finance tag will be added. aws datazone update-project \ --domain-identifier "$DOMAIN_ID" \ --identifier "$PROJECT_ID" \ --project-profile-version "latest" \ --region "$REGION" \ --resource-tags '{ "CostCenter": "789", "ACME-Department": "Finance" }' Project-level tags (those not configured from the project profile) need to be passed during project update to be preserved. For tags with isValueEditable = true configured from the project profile, any override previously set needs to be passed again, or the value will revert to the default from the project profile. Validating resources are tagged Validate that tags are placed correctly. An example resource that is created by the project is the project IAM role. Viewing the tags for this role should show the tags configured from the project profile.
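Once you know the project role name, you can also verify the tags from the AWS CLI instead of the console; the following is a minimal sketch, and the role name shown is a placeholder that you would replace with the actual role created for your project: aws iam list-role-tags --role-name datazone_usr_role_example The output should include the tags pushed from the project profile (such as ACME-Application and CostCenter) along with any custom project tags. To find the project role and view its tags in the console, use the following steps.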
Open SageMaker Unified Studio to get the project role from the Project details section of the project. The role name begins with datazone_usr_role_ . Open the IAM console . In the navigation pane, choose Roles . Search for the project IAM role. Select the Tags tab. Conclusion In this post, we discussed tagging-related use cases from customers and walked through getting started with custom tags in Amazon SageMaker to place tags on the resources created by the project. By giving administrators a way to configure project profiles with standardized tag configurations, you can now help ensure consistent tagging practices across all SageMaker Unified Studio projects while maintaining compliance with SCPs. This feature addresses two critical customer needs: enforcing organizational tagging standards through automated governance mechanisms and enabling accurate cost attribution reporting across multi-service deployments. To learn more, visit Amazon SageMaker , then get started with Project resource tags . About the authors David Victoria David is a Senior Technical Product Manager with Amazon SageMaker at AWS. He focuses on improving administration and governance capabilities needed for customers to support their analytics systems. He is passionate about helping customers realize the most value from their data in a secure, governed manner. Rohit Srikanta Rohit is a Senior Software Engineer at AWS. He works on building and scaling services within Amazon SageMaker. He focuses on developing robust and scalable distributed systems and is passionate about solving complex engineering challenges to deliver maximum customer value. Ahan Malli Ahan is a Software Development Engineer at AWS. He works on the core data and governance layer behind Amazon SageMaker. He’s passionate about building scalable distributed systems and streamlining developer workflows. When he’s not coding, you can find him traveling or hiking Pacific Northwest trails. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/learning-levels/advanced-300/page/2/ | Advanced (300) | AWS Big Data Blog Category: Advanced (300) Introducing Apache Spark upgrade agent for Amazon EMR by Keerthi Chadalavada , McCall Peltier , Rajendra Gujja , Bo Li , Malinda Malwala , Mohit Saxena , Mukul Prasad , Vaibhav Naik , Pradeep Patel , Shubham Mehta , and XiaoRun Yu on 15 DEC 2025 in Advanced (300) , Amazon EMR , Kiro , Technical How-to Permalink Comments Share In this post, you learn how to assess your existing Amazon EMR Spark applications, use the Spark upgrade agent directly from the Kiro IDE, upgrade a sample e-commerce order analytics Spark application project (including build configs, source code, tests, and data quality validation), and review code changes before rolling them out through your CI/CD pipeline. How Socure achieved 50% cost reduction by migrating from self-managed Spark to Amazon EMR Serverless by Junaid Effendi, Pengyu Wang and Raj Ramasubbu on 15 DEC 2025 in Advanced (300) , Amazon EMR , Customer Solutions , Serverless Permalink Comments Share Socure is one of the leading providers of digital identity verification and fraud solutions. Socure’s data science environment includes a streaming pipeline called Transaction ETL (TETL), built on OSS Apache Spark running on Amazon EKS. TETL ingests and processes data volumes ranging from small to large datasets while maintaining high-throughput performance. In this post, we show how Socure was able to achieve 50% cost reduction by migrating the TETL streaming pipeline from self-managed Spark to Amazon EMR Serverless. Introducing Apache Iceberg materialized views in AWS Glue Data Catalog by Tomohiro Tanaka , Layth Yassin , Leon Lin , Mahesh Mishra , and Noritaka Sekiyama on 09 DEC 2025 in Advanced (300) , Announcements , AWS Glue Permalink Comments Share Hundreds of thousands of customers build artificial intelligence and machine learning (AI/ML) and analytics applications on AWS, frequently transforming data through multiple stages for improved query performance—from raw data to processed datasets to final analytical tables. Data engineers must solve complex problems, including detecting what data has changed in base tables, writing and maintaining transformation […] Auto-optimize your Amazon OpenSearch Service vector database by Dylan Tong , Huibin Shen , Janelle Arita , Vamshi Vijay Nakkirtha , and Vikash Tiwari on 08 DEC 2025 in Advanced (300) , Amazon OpenSearch Service , Announcements Permalink Comments Share AWS recently announced the general availability of auto-optimize for the Amazon OpenSearch Service vector engine. This feature streamlines vector index optimization by automatically evaluating configuration trade-offs across search quality, speed, and cost savings. You can then run a vector ingestion pipeline to build an optimized index on your desired collection or domain.
Previously, optimizing index […] Build billion-scale vector databases in under an hour with GPU acceleration on Amazon OpenSearch Service by Dylan Tong , Aruna Govindaraju , Corey Nolet , Kshitiz Gupta , Navneet Verma , and Vamshi Vijay Nakkirtha on 08 DEC 2025 in Advanced (300) , Amazon OpenSearch Service , Announcements Permalink Comments Share AWS recently announced the general availability of GPU-accelerated vector (k-NN) indexing on Amazon OpenSearch Service. You can now build billion-scale vector databases in under an hour and index vectors up to 10 times faster at a quarter of the cost. This feature dynamically attaches serverless GPUs to boost domains and collections running CPU-based instances. With […] SAP data ingestion and replication with AWS Glue zero-ETL by Shashank Sharma , Abhijeet Jangam , Diego Lombardini , and Parth Panchal on 08 DEC 2025 in Advanced (300) , Amazon S3 Tables , AWS Glue , Technical How-to Permalink Comments Share AWS Glue zero-ETL with SAP now supports data ingestion and replication from SAP data sources such as Operational Data Provisioning (ODP) managed SAP Business Warehouse (BW) extractors, Advanced Business Application Programming (ABAP), Core Data Services (CDS) views, and other non-ODP data sources. Zero-ETL data replication and schema synchronization writes extracted data to AWS services like Amazon Redshift, Amazon SageMaker lakehouse, and Amazon S3 Tables, alleviating the need for manual pipeline development. In this post, we show how to create and monitor a zero-ETL integration with various ODP and non-ODP SAP sources. Run Apache Spark and Iceberg 4.5x faster than open source Spark with Amazon EMR by Atul Payapilly , Akshaya KP , Giovanni Matteo Fumarola , and Hari Kishore Chaparala on 26 NOV 2025 in Advanced (300) , Amazon EMR , Announcements , Technical How-to Permalink Comments Share This post shows how Amazon EMR 7.12 can make your Apache Spark and Iceberg workloads up to 4.5x faster performance. Apache Spark encryption performance improvement with Amazon EMR 7.9 by Sonu Kumar Singh , Roshin Babu , Polaris Jhandi , and Zheng Yuan on 26 NOV 2025 in Advanced (300) , Amazon EMR , Announcements Permalink Comments Share In this post, we analyze the results from our benchmark tests comparing the Amazon EMR 7.9 optimized Spark runtime against Spark 3.5.5 without encryption optimizations. We walk through a detailed cost analysis and provide step-by-step instructions to reproduce the benchmark. Introducing catalog federation for Apache Iceberg tables in the AWS Glue Data Catalog by Debika D , Pratik Das , and Srividya Parthasarathy on 26 NOV 2025 in Advanced (300) , Amazon SageMaker , Announcements , AWS Glue , AWS Lake Formation Permalink Comments Share AWS Glue now supports catalog federation for remote Iceberg tables in the Data Catalog. With catalog federation, you can query remote Iceberg tables, stored in Amazon S3 and cataloged in remote Iceberg catalogs, using AWS analytics engines and without moving or duplicating tables. In this post, we discuss how to get started with catalog federation for Iceberg tables in the Data Catalog. 
Getting started with Apache Iceberg write support in Amazon Redshift by Sanket Hase , Harshida Patel , Ritesh Sinha , Raghu Kuppala , and Xiening Dai on 26 NOV 2025 in Advanced (300) , Amazon Redshift , Amazon S3 Tables , Technical How-to Permalink Comments Share In this post, we show how you can use Amazon Redshift to write data directly to Apache Iceberg tables stored in Amazon S3 and S3 Tables for seamless integration between your data warehouse and data lake while maintaining ACID compliance. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/learning-levels/advanced-300/#aws-page-content-main | Advanced (300) | AWS Big Data Blog Category: Advanced (300) Access Databricks Unity Catalog data using catalog federation in the AWS Glue Data Catalog by Srividya Parthasarathy and Venkat Viswanathan on 12 JAN 2026 in Advanced (300) , Amazon SageMaker , AWS Glue , AWS Lake Formation , Technical How-to Permalink Comments Share AWS has launched the catalog federation capability, enabling direct access to Apache Iceberg tables managed in Databricks Unity Catalog through the AWS Glue Data Catalog. With this integration, you can discover and query Unity Catalog data in Iceberg format using an Iceberg REST API endpoint, while maintaining granular access controls through AWS Lake Formation. In this post, we demonstrate how to set up catalog federation between the Glue Data Catalog and Databricks Unity Catalog, enabling data querying using AWS analytics services. Use Amazon SageMaker custom tags for project resource governance and cost tracking by David Victoria , Ahan Malli , and Rohit Srikanta on 08 JAN 2026 in Advanced (300) , Amazon SageMaker , Amazon SageMaker Unified Studio , Technical How-to Permalink Comments Share Amazon SageMaker announced a new feature that you can use to add custom tags to resources created through an Amazon SageMaker Unified Studio project. This helps you enforce tagging standards that conform to your organization’s service control policies (SCPs) and helps enable cost tracking and reporting practices on resources created across the organization. In this post, we look at use cases for custom tags and how to use the AWS Command Line Interface (AWS CLI) to add tags to project resources. Create AWS Glue Data Catalog views using cross-account definer roles by Aarthi Srinivasan and Sundeep Kumar on 08 JAN 2026 in Advanced (300) , Analytics , AWS Glue , Technical How-to Permalink Comments Share In this post, we demonstrate how to use cross-account IAM definer roles with AWS Glue Data Catalog views. We show how data owner accounts can create and manage views in a central governance account while maintaining security and control over their data assets. Simplify multi-warehouse data governance with Amazon Redshift federated permissions by Satesh Sonti , Ning Di , Sandeep Adwankar , Ramchandra Anil Kulkarni , and Abhishek Rai Sharma on 05 JAN 2026 in Advanced (300) , Amazon Redshift , Technical How-to Permalink Comments Share Amazon Redshift federated permissions simplify permissions management across multiple Redshift warehouses. In this post, we show you how to define data permissions one time and automatically enforce them across warehouses in your AWS account, removing the need to re-create security policies in each warehouse. Unifying governance and metadata across Amazon SageMaker Unified Studio and Atlan by Karan Singh Thakur, Satabrata Paul , Divij Bhatia , and Leonardo Gomez on 22 DEC 2025 in Advanced (300) , Amazon SageMaker Unified Studio , Technical How-to Permalink Comments Share In this post, we show you how to unify governance and metadata across Amazon SageMaker Unified Studio and Atlan through a comprehensive bidirectional integration.
You’ll learn how to deploy the necessary AWS infrastructure, configure secure connections, and set up automated synchronization to maintain consistent metadata across both platforms. Modernize Apache Spark workflows using Spark Connect on Amazon EMR on Amazon EC2 by Philippe Wanner and Ege Oguzman on 18 DEC 2025 in Advanced (300) , Amazon EC2 , Amazon EMR , Technical How-to Permalink Comments Share In this post, we demonstrate how to implement Apache Spark Connect on Amazon EMR on Amazon Elastic Compute Cloud (Amazon EC2) to build decoupled data processing applications. We show how to set up and configure Spark Connect securely, so you can develop and test Spark applications locally while executing them on remote Amazon EMR clusters. Create and update Apache Iceberg tables with partitions in the AWS Glue Data Catalog using the AWS SDK and AWS CloudFormation by Aarthi Srinivasan and Pratik Das on 18 DEC 2025 in Advanced (300) , AWS Glue , Learning Levels , Technical How-to Permalink Comments Share In this post, we show how to create and update Iceberg tables with partitions in the Data Catalog using the AWS SDK and AWS CloudFormation. IPv6 addressing with Amazon Redshift by Srini Ponnada , Zirui Hua , Niranjan Kulkarni , Sandeep Adwankar , Sumanth Punyamurthula , and Yanzhu Ji on 17 DEC 2025 in Advanced (300) , Amazon Redshift , Technical How-to Permalink Comments Share As we witness the gradual transition from IPv4 to IPv6, AWS continues to expand its support for dual-stack networking across its service portfolio. In this post, we show how you can migrate your Amazon Redshift Serverless workgroup from IPv4-only to dual-stack mode, so you can make your data warehouse future-ready. Reference guide for building a self-service analytics solution with Amazon SageMaker by Navnit Shukla , Ayan Majumder , and Karan Edikala on 16 DEC 2025 in Advanced (300) , Amazon SageMaker Data & AI Governance , Technical How-to Permalink Comments Share In this post, we show how to use Amazon SageMaker Catalog to publish data from multiple sources, including Amazon S3, Amazon Redshift, and Snowflake. This approach enables self-service access while ensuring robust data governance and metadata management. Introducing the Apache Spark troubleshooting agent for Amazon EMR and AWS Glue by Jake Zych , Andrew Kim , Maheedhar Reddy Chappidi , Arunav Gupta , Jeremy Samuel , Muhammad Ali Gulzar , Mohit Saxena , Mukul Prasad , Kartik Panjabi , Shubham Mehta , Vishal Kajjam , Vidyashankar Sivakumar , and Wei Tang on 15 DEC 2025 in Advanced (300) , Amazon EMR , AWS Glue , Kiro , Technical How-to Permalink Comments Share In this post, we show you how the Apache Spark troubleshooting agent helps analyze Apache Spark issues by providing detailed root causes and actionable recommendations. You’ll learn how to streamline your troubleshooting workflow by integrating this agent with your existing monitoring solutions across Amazon EMR and AWS Glue. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/aws-analytics-at-reinvent-2025-unifying-data-ai-and-governance-at-scale/ | AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale | AWS Big Data Blog AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, accelerate insights with AI, and maintain robust governance without sacrificing agility. Amazon SageMaker: Your data platform, simplified AWS introduced a faster, simpler approach to data platform onboarding for Amazon SageMaker Unified Studio . The new one-click onboarding experience eliminates weeks of setup, so teams can start working with existing datasets in minutes using their current AWS Identity and Access Management (IAM) roles and permissions. Accessible directly from Amazon SageMaker , Amazon Athena , Amazon Redshift , and Amazon S3 Tables consoles, this streamlined experience automatically creates SageMaker Unified Studio projects with existing data permissions intact. At its core is a powerful new serverless notebook that reimagines how data professionals work. This single interface combines SQL queries, Python code, Apache Spark processing, and natural language prompts, backed by Amazon Athena for Apache Spark to scale from interactive exploration to petabyte-scale jobs. Data engineers, analysts, and data scientists no longer need to context-switch between different tools based on workload—they can explore data with SQL, build models with Python, and use AI assistance, all in one place. The introduction of Amazon SageMaker Data Agent in the new SageMaker notebooks marks a pivotal moment in AI-assisted development for data builders. This built-in agent doesn’t only generate code; it understands your data context, catalog information, and business metadata to create intelligent execution plans from natural language descriptions. When you describe an objective, the agent breaks down complex analytics and machine learning (ML) tasks into manageable steps, generates the required SQL and Python code, and maintains awareness of your notebook environment throughout the entire process. This capability transforms hours of manual coding into minutes of guided development, which means teams can focus on gleaning insights rather than repetitive boilerplate. Embracing open data with Apache Iceberg One significant theme across this year’s launches was the widespread adoption of Apache Iceberg across AWS analytics, transforming how organizations manage petabyte-scale data lakes. Catalog federation to remote Iceberg catalogs through the AWS Glue Data Catalog addresses a critical challenge in modern data architectures.
You can now query remote Iceberg tables, stored in Amazon Simple Storage Service (Amazon S3) and catalogued in remote Iceberg catalogs, using preferred AWS analytics services such as Amazon Redshift, Amazon EMR , Amazon Athena, AWS Glue, and Amazon SageMaker, without moving or copying tables. Metadata synchronizes in real time, providing query results that reflect the current state. Catalog federation supports both coarse-grained access control and fine-grained access permissions through AWS Lake Formation, enabling cross-account sharing and trusted identity propagation while maintaining consistent security across federated catalogs. Amazon Redshift now writes directly to Apache Iceberg tables, enabling true open lakehouse architectures where analytics seamlessly span data warehouses and lakes. Apache Spark on Amazon EMR 7.12 , AWS Glue, Amazon SageMaker notebooks, Amazon S3 Tables, and the AWS Glue Data Catalog now support Iceberg V3’s capabilities, including deletion vectors that mark deleted rows without expensive file rewrites, dramatically reducing pipeline costs and accelerating data modifications, as well as row lineage. V3 automatically tracks every record’s history, creating audit trails essential for compliance, and adds table-level encryption that helps organizations meet stringent privacy regulations. These innovations mean faster writes, lower storage costs, comprehensive audit trails, and efficient incremental processing across your data architecture. Governance that scales with your organization Data governance received substantial attention at re:Invent with major enhancements to Amazon SageMaker Catalog . Organizations can now curate data at the column level with custom metadata forms and rich text descriptions , indexed in real time for immediate discoverability. New metadata enforcement rules require data producers to classify assets with approved business vocabulary before publication, providing consistency across the enterprise. The catalog uses Amazon Bedrock large language models (LLMs) to automatically suggest relevant business glossary terms by analyzing table metadata and schema information, bridging the gap between technical schemas and business language. Perhaps most importantly, SageMaker Catalog now exports its entire asset metadata as queryable Apache Iceberg tables through Amazon S3 Tables. This way, teams can analyze catalog inventory with standard SQL to answer questions like “which assets lack business descriptions?” or “how many confidential datasets were registered last month?” without building custom ETL infrastructure. As organizations adopt multi-warehouse architectures to scale and isolate workloads, the new Amazon Redshift federated permissions capability eliminates governance complexity. Define data permissions one time from an Amazon Redshift warehouse, and they are automatically enforced across the warehouses in your account. Row-level, column-level, and masking controls apply consistently regardless of which warehouse queries originate from, and new warehouses automatically inherit permission policies. This horizontal scalability means organizations can add warehouses without increasing governance overhead, and analysts immediately see the databases from registered warehouses. Accelerating AI innovation with Amazon OpenSearch Service Amazon OpenSearch Service introduced powerful new capabilities to simplify and accelerate AI application development.
With support for OpenSearch 3.3 , agentic search enables precise results using natural language inputs without the need for complex queries, making it easier to build intelligent AI agents. The new Apache Calcite-powered PPL engine delivers query optimization and an extensive library of commands for more efficient data processing. As seen in Matt Garman’s keynote , building large-scale vector databases is now dramatically faster with GPU acceleration and auto-optimization . Previously, creating large-scale vector indexes required days of building time and weeks of manual tuning by experts, which slowed innovation and prevented cost-performance optimizations. The new serverless auto-optimize jobs automatically evaluate index configurations—including k-nearest neighbors (k-NN) algorithms, quantization, and engine settings—based on your specified search latency and recall requirements. Combined with GPU acceleration, you can build optimized indexes up to ten times faster at 25% of the indexing cost, with serverless GPUs that activate dynamically and bill only when providing speed boosts. These advancements simplify scaling AI applications such as semantic search, recommendation engines, and agentic systems, so teams can innovate faster by dramatically reducing the time and effort needed to build large-scale, optimized vector databases. Performance and cost optimization Also announced in the keynote , Amazon EMR Serverless now eliminates local storage provisioning for Apache Spark workloads, introducing serverless storage that reduces data processing costs by up to 20% while preventing job failures from disk capacity constraints. The fully managed, auto scaling storage encrypts data in transit and at rest with job-level isolation, allowing Spark to release workers immediately when idle rather than keeping them active to preserve temporary data. Additionally, AWS Glue introduced materialized views based on Apache Iceberg, storing precomputed query results that automatically refresh as source data changes. Spark engines across Amazon Athena, Amazon EMR, and AWS Glue intelligently rewrite queries to use these views, accelerating performance by up to eight times while reducing compute costs. The service handles refresh schedules, change detection, incremental updates, and infrastructure management automatically. The new Apache Spark upgrade agent for Amazon EMR transforms version upgrades from months-long projects into week-long initiatives. Using conversational interfaces, engineers express upgrade requirements in natural language while the agent automatically identifies API changes and behavioral modifications across PySpark and Scala applications. Engineers review and approve suggested changes before implementation, maintaining full control while the agent validates functional correctness through data quality checks. Currently supporting upgrades from Spark 2.4 to 3.5, this capability is available through SageMaker Unified Studio, Kiro CLI , or an integrated development environment (IDE) with Model Context Protocol compatibility. For workflow optimization, AWS introduced a new Serverless deployment option for Amazon Managed Workflows for Apache Airflow (Amazon MWAA), which eliminates the operational overhead of managing Apache Airflow environments while optimizing costs through serverless scaling. This new offering addresses key challenges of operational scalability, cost optimization, and access management that data engineers and DevOps teams face when orchestrating workflows. 
For workflow optimization, AWS introduced a new serverless deployment option for Amazon Managed Workflows for Apache Airflow (Amazon MWAA) that eliminates the operational overhead of managing Apache Airflow environments while optimizing costs through serverless scaling. This new offering addresses key challenges of operational scalability, cost optimization, and access management that data engineers and DevOps teams face when orchestrating workflows. With Amazon MWAA Serverless, data engineers can focus on defining their workflow logic rather than monitoring provisioned capacity. They can now submit their Airflow workflows for execution on a schedule or on demand, paying only for the actual compute time used during each task's execution.

Looking forward

These launches collectively represent more than incremental improvements; they signal a fundamental shift in how organizations approach analytics. By unifying data warehousing, data lakes, and ML under a common framework built on Apache Iceberg, simplifying access through intelligent interfaces powered by AI, and maintaining robust governance that scales effortlessly, AWS is giving organizations the tools to focus on insights rather than infrastructure. The emphasis on automation, from AI-assisted development to self-managing materialized views and serverless storage, reduces operational overhead while improving performance and cost efficiency. As data volumes continue to grow and AI becomes increasingly central to business operations, these capabilities position AWS customers to accelerate their data-driven initiatives with unprecedented simplicity and power. To view the re:Invent 2025 Innovation Talk on analytics, visit Harnessing analytics for humans and AI on YouTube.

About the authors

Larry Weber leads product marketing for the analytics portfolio at AWS. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/serverless/#aws-page-content-main | Serverless | AWS Big Data Blog Category: Serverless Amazon EMR Serverless eliminates local storage provisioning, reducing data processing costs by up to 20% by Karthik Prabhakar , Matt Tolton , Neil Mukerje , and Ravi Kumar Singh on 06 JAN 2026 in Amazon EMR , Analytics , Announcements , Intermediate (200) , Serverless In this post, you'll learn how Amazon EMR Serverless eliminates the need to configure local disk storage for Apache Spark workloads through a new serverless storage capability. We explain how this feature automatically handles shuffle operations, reduces data processing costs by up to 20%, prevents job failures from disk capacity constraints, and enables elastic scaling by decoupling storage from compute. How Socure achieved 50% cost reduction by migrating from self-managed Spark to Amazon EMR Serverless by Junaid Effendi, Pengyu Wang and Raj Ramasubbu on 15 DEC 2025 in Advanced (300) , Amazon EMR , Customer Solutions , Serverless Socure is one of the leading providers of digital identity verification and fraud solutions. Socure's data science environment includes a streaming pipeline called Transaction ETL (TETL), built on OSS Apache Spark running on Amazon EKS. TETL ingests and processes data volumes ranging from small to large datasets while maintaining high-throughput performance. In this post, we show how Socure was able to achieve a 50% cost reduction by migrating the TETL streaming pipeline from self-managed Spark to Amazon EMR Serverless. Save up to 24% on Amazon Redshift Serverless compute costs with Reservations by Satesh Sonti and Ashish Agrawal on 24 NOV 2025 in Advanced (300) , Amazon Redshift , Best Practices , Cloud Cost Optimization , Serverless In this post, you learn how Amazon Redshift Serverless Reservations can help you lower your data warehouse costs. We explore ways to determine the optimal number of RPUs to reserve, review example scenarios, and discuss important considerations when purchasing these reservations. Introducing Amazon MWAA Serverless by John Jackson on 17 NOV 2025 in Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Announcements , Intermediate (200) , Serverless , Technical How-to Today, AWS announced Amazon Managed Workflows for Apache Airflow (MWAA) Serverless. This is a new deployment option for MWAA that eliminates the operational overhead of managing Apache Airflow environments while optimizing costs through serverless scaling. In this post, we demonstrate how to use MWAA Serverless to build and deploy scalable workflow automation solutions. Amazon OpenSearch Serverless monitoring: A CloudWatch setup guide by Urmila Iyer and Parth Shah on 24 SEP 2025 in Advanced (300) , Amazon CloudWatch , Amazon OpenSearch Service , Monitoring and observability , Serverless , Technical How-to In this post, we explore commonly used Amazon CloudWatch metrics and alarms for OpenSearch Serverless, walking through the process of selecting relevant metrics, setting appropriate thresholds, and configuring alerts.
This guide provides a comprehensive monitoring strategy that complements the serverless nature of your OpenSearch deployment while maintaining full operational visibility. How AppZen enhances operational efficiency, scalability, and security with Amazon OpenSearch Serverless by Prashanth Dudipala, Madhuri Andhale , Manoj Gupta , and Prashant Agrawal on 26 AUG 2025 in Advanced (300) , Amazon OpenSearch Service , Analytics , Customer Solutions , Serverless , Technical How-to AppZen is a leading provider of AI-driven finance automation solutions. The company's core offering centers around an innovative AI platform designed for modern finance teams, featuring expense management, fraud detection, and autonomous accounts payable solutions. AppZen's technology stack uses computer vision, deep learning, and natural language processing (NLP) to automate financial processes and ensure compliance. […] Amazon Redshift Serverless at 4 RPUs: High-value analytics at low cost by Ricardo Serafim , Ashish Agrawal , and Andre Hass on 22 AUG 2025 in Amazon Redshift , Analytics , Announcements , Serverless Amazon Redshift Serverless now supports 4 RPU configurations, helping you get started with a lower base capacity that runs scalable analytics workloads beginning at $1.50 per hour. In this post, we examine how this new sizing option makes Redshift Serverless accessible to smaller organizations while providing enterprises with cost-effective environments for development, testing, and variable workloads. Building serverless event streaming applications with Amazon MSK and AWS Lambda by Tarun Rai Madan and Masudur Rahaman Sayem on 26 JUN 2025 in Amazon Managed Streaming for Apache Kafka (Amazon MSK) , Analytics , AWS Lambda , Best Practices , Serverless In this post, we describe how you can simplify your event-driven application architecture using AWS Lambda with Amazon MSK. We demonstrate how to configure Lambda as a consumer for Kafka topics, including a cross-account setup, and how to optimize price and performance for these applications. Powering global payout intelligence: How MassPay uses Amazon Redshift Serverless and zero-ETL to drive deeper analytics by Yossi Shlomo and Milind Oke on 02 JUN 2025 in Amazon Redshift , Analytics , Architecture , Customer Solutions , Financial Services , Intermediate (200) , Serverless In this post, we cover how understanding real-time payout performance, identifying customer behavior patterns across regions, and optimizing internal operations required more than traditional business intelligence and analytics tools, and how, since implementing Amazon Redshift and zero-ETL, MassPay has seen a 90% reduction in data availability latency and payments data available for analytics 1.5x faster, leading to a 45% reduction in time-to-insight and 37% fewer support tickets related to transaction visibility and payment inquiries.
Petabyte-scale data migration made simple: AppsFlyer's best practice journey with Amazon EMR Serverless by Roy Ninio , Avichay Marciano , Eitav Arditti , and Yonatan Dolan on 12 MAY 2025 in Amazon EMR , Architecture , Best Practices , Serverless , Technical How-to In this post, we share how AppsFlyer successfully migrated their massive data infrastructure from self-managed Hadoop clusters to Amazon EMR Serverless, detailing their best practices, the challenges they overcame, and lessons learned that can help guide other organizations in similar transformations. | 2026-01-13T09:29:12 |
https://aws.amazon.com/blogs/big-data/category/compute/amazon-ec2/ | Amazon EC2 | AWS Big Data Blog Category: Amazon EC2 Modernize Apache Spark workflows using Spark Connect on Amazon EMR on Amazon EC2 by Philippe Wanner and Ege Oguzman on 18 DEC 2025 in Advanced (300) , Amazon EC2 , Amazon EMR , Technical How-to In this post, we demonstrate how to implement Apache Spark Connect on Amazon EMR on Amazon Elastic Compute Cloud (Amazon EC2) to build decoupled data processing applications. We show how to set up and configure Spark Connect securely, so you can develop and test Spark applications locally while executing them on remote Amazon EMR clusters. Automate and orchestrate Amazon EMR jobs using AWS Step Functions and Amazon EventBridge by Senthil Kamala Rathinam and Shashidhar Makkapati on 15 SEP 2025 in Advanced (300) , Amazon CloudWatch , Amazon EC2 , Amazon EMR , Amazon EventBridge , Analytics , AWS Step Functions , Technical How-to In this post, we discuss how to build a fully automated, scheduled Spark processing pipeline using Amazon EMR on EC2, orchestrated with Step Functions and triggered by EventBridge. We walk through how to deploy this solution using AWS CloudFormation, process the COVID-19 public dataset in Amazon Simple Storage Service (Amazon S3), and store the aggregated results in Amazon S3. Achieve low-latency data processing with Amazon EMR on AWS Local Zones by Gagan Brahmi , Arun Shanmugam , and George Oakes on 18 AUG 2025 in Advanced (300) , Amazon EC2 , Amazon EMR , Analytics , Technical How-to By deploying Amazon EMR on AWS Local Zones, organizations can achieve single-digit millisecond latency data processing for applications while maintaining data residency compliance. This post demonstrates how to use AWS Local Zones to deploy EMR clusters closer to your users, enabling millisecond-level response times. PackScan: Building real-time sort center analytics with AWS Services by Sairam Vangapally and Nitin Goyal on 30 MAY 2025 in Amazon API Gateway , Amazon Data Firehose , Amazon EC2 , Amazon OpenSearch Service , Amazon Simple Notification Service (SNS) , Amazon Simple Queue Service (SQS) , AWS Lambda , Customer Solutions , Experience-Based Acceleration , Technical How-to In this post, we explore how PackScan uses Amazon cloud-based services to drive real-time visibility, improve logistics efficiency, and support the seamless movement of packages across Amazon's Middle Mile network. Analyze Amazon EMR on Amazon EC2 cluster usage with Amazon Athena and Amazon QuickSight by Boon Lee Eu , Kyara Labrador , Vikas Omer , and Lorenzo Ripani on 25 OCT 2024 in Amazon Athena , Amazon EC2 , Amazon EMR , Amazon QuickSight , Technical How-to In this post, we guide you through deploying a comprehensive solution in your Amazon Web Services (AWS) environment to analyze Amazon EMR on EC2 cluster usage. By using this solution, you will gain a deep understanding of resource consumption and associated costs of individual applications running on your EMR cluster.
Stream data to Amazon S3 for real-time analytics using the Oracle GoldenGate S3 handler by Prasad Matkar , Arun Sankaranarayanan , and Giorgio Bonzi on 08 AUG 2024 in Amazon EC2 , Amazon Simple Storage Service (S3) , RDS for Oracle Modern business applications rely on timely and accurate data with increasing demand for real-time analytics. There is a growing need for efficient and scalable data storage solutions. Data at times is stored in different datasets and needs to be consolidated before meaningful and complete insights can be drawn from the datasets. This is where replication […] Push Amazon EMR step logs from Amazon EC2 instances to Amazon CloudWatch logs by Ennio Pastore on 07 APR 2023 in Amazon CloudWatch , Amazon EC2 , Amazon EMR , Analytics , Intermediate (200) Amazon EMR is a big data service offered by AWS to run Apache Spark and other open-source applications on AWS to build scalable data pipelines in a cost-effective manner. Monitoring the logs generated from the jobs deployed on EMR clusters is essential to help detect critical issues in real time and identify root causes quickly. […] Run fault tolerant and cost-optimized Spark clusters using Amazon EMR on EKS and Amazon EC2 Spot Instances by Kinnar Kumar Sen on 19 DEC 2022 in Amazon EC2 , Amazon Elastic Kubernetes Service , Amazon EMR , Amazon EMR on EKS , Analytics , Best Practices , Compute , Technical How-to Amazon EMR on EKS is a deployment option in Amazon EMR that allows you to run Spark jobs on Amazon Elastic Kubernetes Service (Amazon EKS). Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances save you up to 90% over On-Demand Instances and are a great way to cost-optimize the Spark workloads running on Amazon […] Amazon EMR launches support for Amazon EC2 C6i, M6i, I4i, R6i and R6id instances to improve cost performance for Spark workloads by 6–33% by Al MS and Kevin Ryoo on 07 DEC 2022 in Amazon EC2 , Amazon EMR , Analytics , Foundational (100) Amazon EMR provides a managed service to easily run analytics applications using open-source frameworks such as Apache Spark, Hive, Presto, Trino, HBase, and Flink. The Amazon EMR runtime for Spark and Presto includes optimizations that provide over two times performance improvements over open-source Apache Spark and Presto, so that your applications run faster and at […] How ZS created a multi-tenant self-service data orchestration platform using Amazon MWAA by Manish Mehra , Abhishek I S , Anirudh Vohra , Sidrah Sayyad , and Parnab Basak on 16 SEP 2022 in Amazon EC2 , Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon RDS , Customer Solutions This post is co-authored by Manish Mehra, Anirudh Vohra, Sidrah Sayyad, and Abhishek I S (from ZS), and Parnab Basak (from AWS). The team at ZS collaborated closely with AWS to build a modern, cloud-native data orchestration platform. ZS is a management consulting and technology firm focused on transforming global healthcare and beyond. We […] | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/category/application-integration/amazon-eventbridge/page/2/ | Amazon EventBridge | AWS Big Data Blog Category: Amazon EventBridge Enable metric-based and scheduled scaling for Amazon Managed Service for Apache Flink by Francisco Morillo and Deepthi Mohan on 10 JAN 2024 in Amazon CloudWatch , Amazon EventBridge , Amazon Managed Service for Apache Flink , AWS Lambda , AWS Step Functions , Best Practices , Technical How-to Thousands of developers use Apache Flink to build streaming applications to transform and analyze data in real time. Apache Flink is an open source framework and engine for processing data streams. It's highly available and scalable, delivering high throughput and low latency for the most demanding stream-processing applications. Monitoring and scaling your applications is critical […] Introducing shared VPC support on Amazon MWAA by John Jackson on 15 NOV 2023 in Amazon EventBridge , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon Simple Queue Service (SQS) , Amazon VPC , AWS Lambda , AWS Organizations , Intermediate (200) In this post, we demonstrate automating deployment of Amazon Managed Workflows for Apache Airflow (Amazon MWAA) using customer-managed endpoints in a VPC, providing compatibility with shared, or otherwise restricted, VPCs. Data scientists and engineers have made Apache Airflow a leading open source tool to create data pipelines due to its active open source community, familiar […] Build event-driven architectures with Amazon MSK and Amazon EventBridge by Florian Mair and Benjamin Meyer on 28 SEP 2023 in Amazon EventBridge , Amazon Managed Streaming for Apache Kafka (Amazon MSK) , Application Integration , Technical How-to Based on immutable facts (events), event-driven architectures (EDAs) allow businesses to gain deeper insights into their customers' behavior, unlocking more accurate and faster decision-making processes that lead to better customer experiences. In EDAs, modern event brokers, such as Amazon EventBridge and Apache Kafka, play a key role in publishing and subscribing to events. EventBridge is […] Simplify operational data processing in data lakes using AWS Glue and Apache Hudi by Ravi Itha and Srinivas Kandi on 13 SEP 2023 in Advanced (300) , Amazon Athena , Amazon EventBridge , Amazon Simple Queue Service (SQS) , Amazon Simple Storage Service (S3) , AWS Big Data , AWS Database Migration Service , AWS Glue , AWS Lambda , AWS Step Functions , Technical How-to AWS has invested in native service integration with Apache Hudi and published technical content to enable you to use Apache Hudi with AWS Glue (for example, refer to Introducing native support for Apache Hudi, Delta Lake, and Apache Iceberg on AWS Glue for Apache Spark, Part 1: Getting Started). In AWS ProServe-led customer engagements, the use cases we work on usually come with technical complexity and scalability requirements. In this post, we discuss a common use case in relation to operational data processing and the solution we built using Apache Hudi and AWS Glue.
Monitor data pipelines in a serverless data lake by Virendhar Sivaraman and Vivek Shrivastava on 09 AUG 2023 in Amazon Athena , Amazon CloudWatch , Amazon EventBridge , Amazon Simple Notification Service (SNS) , Amazon Simple Storage Service (S3) , AWS Glue , AWS Lambda , Intermediate (200) , Technical How-to AWS serverless services, including but not limited to AWS Lambda, AWS Glue, AWS Fargate, Amazon EventBridge, Amazon Athena, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and Amazon Simple Storage Service (Amazon S3), have become the building blocks for any serverless data lake, providing key mechanisms to ingest and transform data […] Cross-account integration between SaaS platforms using Amazon AppFlow by Ramakant Joshi , Debaprasun Chakraborty , and Suraj Subramani Vineet on 25 APR 2023 in Amazon AppFlow , Amazon EventBridge , AWS Glue , AWS Step Functions , Intermediate (200) Implementing an effective data sharing strategy that satisfies compliance and regulatory requirements is complex. Customers often need to share data between disparate software as a service (SaaS) platforms within their organization or across organizations. On many occasions, they need to apply business logic to the data received from the source SaaS platform before pushing it […] Build event-driven data pipelines using AWS Controllers for Kubernetes and Amazon EMR on EKS by Victor Gu , Peter Dalbhanjan , and Michael Gasch on 30 MAR 2023 in Amazon Elastic Kubernetes Service , Amazon EMR on EKS , Amazon EventBridge , Amazon Simple Storage Service (S3) , AWS Step Functions , Expert (400) , Technical How-to An event-driven architecture is a software design pattern in which decoupled applications can asynchronously publish and subscribe to events via an event broker. By promoting loose coupling between components of a system, an event-driven architecture leads to greater agility and can enable components in the system to scale independently and fail without impacting other services. […] Use an event-driven architecture to build a data mesh on AWS by Jan Michael Go Tan , David Greenshtein , Vincent Gromakowski , and Dzenan Softic on 15 NOV 2022 in Advanced (300) , Amazon EventBridge , Analytics , AWS Big Data , AWS Glue , AWS Lake Formation , AWS Step Functions , Serverless In this post, we take the data mesh design discussed in Design a data mesh architecture using AWS Lake Formation and AWS Glue, and demonstrate how to initialize data domain accounts to enable managed sharing; we also go through how we can use an event-driven approach to automate processes between the central governance account and […] Trigger an AWS Glue DataBrew job based on an event generated from another DataBrew job by Nipun Chagari and Prarthana Angadi on 02 JUN 2022 in Amazon EventBridge , Analytics , AWS Glue DataBrew , AWS Step Functions Organizations today have continuous incoming data, and analyzing this data in a timely fashion is becoming a common requirement for data analytics and machine learning (ML) use cases.
As part of this, you need clean data in order to gain insights that can enable enterprises to get the most out of their data for business […] Audit AWS service events with Amazon EventBridge and Amazon Kinesis Data Firehose by Anand Shah on 01 MAR 2022 in Amazon Athena , Amazon Data Firehose , Amazon EventBridge , Analytics , AWS Big Data , AWS Glue , AWS Lambda , Serverless , Technical How-to February 9, 2024: Amazon Kinesis Data Firehose has been renamed to Amazon Data Firehose. Read the AWS What's New post to learn more. Amazon EventBridge is a serverless event bus that makes it easy to build event-driven applications at scale using events generated from your applications, integrated software as a service (SaaS) applications, and AWS […]
.rgft_d27b4751.rgft_453dc601{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d27b4751.rgft_453dc601{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_d27b4751.rgft_949ed5ce{font-size:calc(.625rem * var(--font-size-multiplier, 1.6));line-height:1.2;font-weight:400;font-family:Amazon Ember Mono,Consolas,Andale Mono WT,Andale Mono,Lucida Console,Lucida Sans Typewriter,DejaVu Sans Mono,Bitstream Vera Sans Mono,Liberation Mono,Nimbus Mono L,Monaco,Courier New,Courier,monospace}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d27b4751.rgft_949ed5ce{font-size:calc(.625rem * var(--font-size-multiplier, 1.6));line-height:1.2;font-weight:400}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_d27b4751.rgft_949ed5ce{font-size:calc(.625rem * var(--font-size-multiplier, 1.6));line-height:1.2;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_d27b4751.rgft_949ed5ce{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_d27b4751.rgft_949ed5ce{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/category/analytics/amazon-emr/ | Amazon EMR | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Amazon EMR AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, […] Amazon EMR Serverless eliminates local storage provisioning, reducing data processing costs by up to 20% by Karthik Prabhakar , Matt Tolton , Neil Mukerje , and Ravi Kumar Singh on 06 JAN 2026 in Amazon EMR , Analytics , Announcements , Intermediate (200) , Serverless Permalink Comments Share In this post, you’ll learn how Amazon EMR Serverless eliminates the need to configure local disk storage for Apache Spark workloads through a new serverless storage capability. We explain how this feature automatically handles shuffle operations, reduces data processing costs by up to 20%, prevents job failures from disk capacity constraints, and enables elastic scaling by decoupling storage from compute. Modernize Apache Spark workflows using Spark Connect on Amazon EMR on Amazon EC2 by Philippe Wanner and Ege Oguzman on 18 DEC 2025 in Advanced (300) , Amazon EC2 , Amazon EMR , Technical How-to Permalink Comments Share In this post, we demonstrate how to implement Apache Spark Connect on Amazon EMR on Amazon Elastic Compute Cloud (Amazon EC2) to build decoupled data processing applications. We show how to set up and configure Spark Connect securely, so you can develop and test Spark applications locally while executing them on remote Amazon EMR clusters. Introducing the Apache Spark troubleshooting agent for Amazon EMR and AWS Glue by Jake Zych , Andrew Kim , Maheedhar Reddy Chappidi , Arunav Gupta , Jeremy Samuel , Muhammad Ali Gulzar , Mohit Saxena , Mukul Prasad , Kartik Panjabi , Shubham Mehta , Vishal Kajjam , Vidyashankar Sivakumar , and Wei Tang on 15 DEC 2025 in Advanced (300) , Amazon EMR , AWS Glue , Kiro , Technical How-to Permalink Comments Share In this post, we show you how the Apache Spark troubleshooting agent helps analyze Apache Spark issues by providing detailed root causes and actionable recommendations. You’ll learn how to streamline your troubleshooting workflow by integrating this agent with your existing monitoring solutions across Amazon EMR and AWS Glue. 
Introducing Apache Spark upgrade agent for Amazon EMR by Keerthi Chadalavada , McCall Peltier , Rajendra Gujja , Bo Li , Malinda Malwala , Mohit Saxena , Mukul Prasad , Vaibhav Naik , Pradeep Patel , Shubham Mehta , and XiaoRun Yu on 15 DEC 2025 in Advanced (300) , Amazon EMR , Kiro , Technical How-to Permalink Comments Share In this post, you learn how to assess your existing Amazon EMR Spark applications, use the Spark upgrade agent directly from the Kiro IDE, upgrade a sample e-commerce order analytics Spark application project (including build configs, source code, tests, and data quality validation), and review code changes before rolling them out through your CI/CD pipeline. Accelerate Apache Hive read and write on Amazon EMR using enhanced S3A by Ramesh Kandasamy , Giovanni Matteo Fumarola , Himanshu Mishra , Paramvir Singh , and Anmol Sundaram on 15 DEC 2025 in Amazon EMR , Analytics , Announcements , Intermediate (200) Permalink Comments Share In this post, we demonstrate how Apache Hive on Amazon EMR 7.10 delivers significant performance improvements for both read and write operations on Amazon S3. Amazon EMR HBase on Amazon S3 transitioning to EMR S3A with comparable EMRFS performance by Dong Li , Giovanni Matteo Fumarola , and Ramesh Kandasamy on 15 DEC 2025 in Amazon EMR , Analytics , Announcements , AWS Big Data Permalink Comments Share Starting with version 7.10, Amazon EMR is transitioning from EMR File System (EMRFS) to EMR S3A as the default file system connector for Amazon S3 access. This transition brings HBase on Amazon S3 to a new level, offering performance parity with EMRFS while delivering substantial improvements, including better standardization, improved portability, stronger community support, improved performance through non-blocking I/O, asynchronous clients, and better credential management with AWS SDK V2 integration. In this post, we discuss this transition and its benefits. How Socure achieved 50% cost reduction by migrating from self-managed Spark to Amazon EMR Serverless by Junaid Effendi, Pengyu Wang and Raj Ramasubbu on 15 DEC 2025 in Advanced (300) , Amazon EMR , Customer Solutions , Serverless Permalink Comments Share Socure is one of the leading providers of digital identity verification and fraud solutions. Socure’s data science environment includes a streaming pipeline called Transaction ETL (TETL), built on OSS Apache Spark running on Amazon EKS. TETL ingests and processes data volumes ranging from small to large datasets while maintaining high-throughput performance. In this post, we show how Socure was able to achieve 50% cost reduction by migrating the TETL streaming pipeline from self-managed spark to Amazon EMR serverless. Run Apache Spark and Iceberg 4.5x faster than open source Spark with Amazon EMR by Atul Payapilly , Akshaya KP , Giovanni Matteo Fumarola , and Hari Kishore Chaparala on 26 NOV 2025 in Advanced (300) , Amazon EMR , Announcements , Technical How-to Permalink Comments Share This post shows how Amazon EMR 7.12 can make your Apache Spark and Iceberg workloads up to 4.5x faster performance. Apache Spark encryption performance improvement with Amazon EMR 7.9 by Sonu Kumar Singh , Roshin Babu , Polaris Jhandi , and Zheng Yuan on 26 NOV 2025 in Advanced (300) , Amazon EMR , Announcements Permalink Comments Share In this post, we analyze the results from our benchmark tests comparing the Amazon EMR 7.9 optimized Spark runtime against Spark 3.5.5 without encryption optimizations. 
We walk through a detailed cost analysis and provide step-by-step instructions to reproduce the benchmark. ← Older posts Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Agentic AI? Cloud Computing Concepts Hub AWS Cloud Security What's New Blogs Press Releases Resources Getting Started Training <a d | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/create-aws-glue-data-catalog-views-using-cross-account-definer-roles/ | Create AWS Glue Data Catalog views using cross-account definer roles | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Create AWS Glue Data Catalog views using cross-account definer roles by Aarthi Srinivasan and Sundeep Kumar on 08 JAN 2026 in Advanced (300) , Analytics , AWS Glue , Technical How-to Permalink Comments Share With AWS Glue Data Catalog views you can create a SQL view in the Data Catalog that references one or more base tables. These multi-dialect views support various SQL query engines, providing consistent access across multiple Amazon Web Services (AWS) services including Amazon Athena , Amazon Redshift Spectrum, and Apache Spark in both Amazon EMR and AWS Glue 5.0 . You can now create Data Catalog views using a cross-account AWS Identity and Access Management (IAM) definer role. A definer role is an IAM role used to create the Data Catalog view and has SELECT permissions on all columns of the underlying base tables. This definer role is assumed by AWS Glue and AWS Lake Formation service principals to vend credentials to the base tables’ data whenever the view is queried. The definer role allows the Data Catalog view to be shared to principals or AWS accounts so that you can share a filtered subset of data without sharing the base tables. Previously, Data Catalog views required a definer role within the same AWS account as the base tables. The introduction of cross-account definer roles enables Data Catalog view creation in enterprise data mesh architectures. In this setup, database and table metadata is centralized in a governance account, and individual data owner accounts maintain control over table creation and management through their IAM roles. Data owner accounts can now create and manage Data Catalog views in the central governance accounts using their existing continuous integration and continuous delivery (CI/CD) pipeline roles. In this post, we show you a cross-account scenario involving two AWS accounts: a central governance account containing the tables and hosting the views and a data owner (producer) account with the IAM role used to create and manage views. We provide implementation details for both SPARK dialect using AWS SDK code samples and ATHENA dialect using SQL commands. Using this approach, you can implement sophisticated data governance models at enterprise scale while maintaining operational efficiency across your AWS environment. Key benefits Key benefits for cross-account definer roles are as follows: Enhanced data mesh support – Enterprises with multi-account data lakehouse architectures can now maintain their existing operational model where data owner accounts manage table creation and updates using their established IAM roles. These same roles can now create and manage Data Catalog views across account boundaries. Strengthened security controls – By keeping table and view management within data owner account roles: Security posture is enhanced through proper separation of duties. Audit trails become more comprehensive and meaningful. Access controls follow the principle of least privilege. Elimination of data duplication – Data owner accounts can create views in central accounts that: Provide access to specific data subsets without duplicating tables. 
Reduce storage costs and management overhead. Maintain a single source of truth while enabling targeted data sharing. Solution overview An example customer has a database with two transaction tables in their central account, where the catalog and permissions are maintained. With the database shared to the data owner (producer) account, we create a Data Catalog view in the central account on these two tables, using the producer’s definer role. The view from the central account can be shared to additional consumer accounts and queried. We illustrate creating the SPARK dialect using create-table CLI , and add the ATHENA dialect for the same view from the Athena console . We also provide the AWS SDK sample code for CreateTable() and UpdateTable() , with view definition and a sample pySpark script to read and verify the view in AWS Glue. The following diagram shows the table, view, and definer IAM role placements between a central governance account and data producer account. Prerequisites To perform this solution, you need to have the following prerequisites: Two AWS accounts with AWS Lake Formation set up. For details, refer to Set up AWS Lake Formation . The Lake Formation setup includes registering your IAM admin role as Lake Formation administrator. In the Data Catalog settings , shown in the following screenshot, Default permissions for newly created databases and tables is set to use Lake Formation permissions only. Cross-account version settings is set to Version 4 . Create an IAM role Data-Analyst in the producer account. For the IAM permissions on this role, refer to Data analyst permissions . This role will also be used as the view definer role. Add the permissions to this definer role from the Prerequisites for creating views . Create database and tables in the central account In this step, you create two tables in the central governance account and populate them with few rows of data: Sign in to the central account as admin user. Open the Athena console and set up the Athena query results bucket . Run the following queries to create two sample Iceberg tables, representing bank customer transactions data: /* Check if the Database exists, if not create new database. 
*/ CREATE DATABASE IF NOT EXISTS bankdata_icebergdb; /*Create transaction_table1*/ Replace the bucket name CREATE TABLE bankdata_icebergdb.transaction_table1 ( transaction_id string, transaction_type string, transaction_amount double) LOCATION 's3://<bucket-name>/bankdata_icebergdb/transaction-table1' TBLPROPERTIES ( 'table_type'='iceberg', 'write_compression'='zstd' ); /*Create transaction_table2*/ CREATE TABLE bankdata_icebergdb.transaction_table2 ( transaction_id string, transaction_location string, transaction_date date) LOCATION 's3://<bucket-name>/bankdata_icebergdb/transaction-table2' TBLPROPERTIES ( 'table_type'='iceberg', 'write_compression'='zstd' ); INSERT INTO bankdata_icebergdb.transaction_table1 (transaction_id, transaction_type, transaction_amount) VALUES ('T001', 'purchase', 50.0), ('T002', 'purchase', 120.0), ('T003', 'refund', 200.5), ('T004', 'purchase', 80.0), ('T005', 'withdrawal', 500.0), ('T006', 'purchase', 300.0), ('T007', 'deposit', 1000.0), ('T008', 'refund', 20.0), ('T009', 'purchase', 150.0), ('T010', 'withdrawal', 75.0); INSERT INTO bankdata_icebergdb.transaction_table2 (transaction_id, transaction_location, transaction_date) VALUES ('T001', 'Charlotte', DATE '2024-10-01'), ('T002', 'Seattle', DATE '2024-10-02'), ('T003', 'Chicago', DATE '2024-10-03'), ('T004', 'Miami', DATE '2024-10-04'), ('T005', 'New York', DATE '2024-10-05'), ('T006', 'Austin', DATE '2024-10-06'), ('T007', 'Denver', DATE '2024-10-07'), ('T008', 'Boston', DATE '2024-10-08'), ('T009', 'San Jose', DATE '2024-10-09'), ('T010', 'Phoenix', DATE '2024-10-10'); Verify the created tables in Athena query editor by running a preview. Share the database and tables from central to producer account In the central governance account, you share the database and the two tables to the producer account and the Data-Analyst role in producer. Sign in to the Lake Formation console as the Lake Formation admin role. In the navigation pane, choose Data permissions . Choose Grant and provide the following information: For Principals , select External accounts and enter the producer account ID, as shown in the following screenshot. For Named Data Catalog Resources , select the default catalog and database bankdata_icebergdb , as shown in the following screenshot. Under Database permissions , select Describe . For Grantable permissions , select Describe . Choose Grant . Repeat the preceding steps to grant access to the producer account definer role Data-Analyst on the database bankdata_icebergdb and the two tables transaction_table1 and transaction_table2 as follows. Under Database permissions , grant Create table and Describe permissions. Under Table permissions , grant Select and Describe on all columns. With these steps, the central governance account data admin steward has shared the database and tables to the producer account definer role. Steps for producer account Follow these steps for the producer account: Sign in to the Lake Formation console on the producer account as the Lake Formation administrator. In the left navigation pane, choose Databases . A blue banner will appear on the console, showing pending invitations from AWS Resource Access Manager (AWS RAM). Open the AWS RAM console and review the AWS RAM shares under Shared with me. You will see the AWS RAM shares in pending state. Select the pending AWS RAM share from central account and choose Accept resource share . After the resource share request is accepted, the shared database shows up in the producer account. 
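The invitation acceptance described above can also be scripted when this setup runs in a pipeline. The following is a minimal boto3 sketch, not part of the original walkthrough; it assumes the producer-account credentials and the region are already configured, and it simply accepts any pending share:

```python
import boto3

# Run with producer-account credentials: accept the pending resource share
# sent from the central governance account via AWS RAM.
ram = boto3.client("ram", region_name="<your-region>")

invitations = ram.get_resource_share_invitations()["resourceShareInvitations"]
for invitation in invitations:
    if invitation["status"] == "PENDING":
        ram.accept_resource_share_invitation(
            resourceShareInvitationArn=invitation["resourceShareInvitationArn"]
        )
        print("Accepted share:", invitation["resourceShareName"])
```

After acceptance, the shared database should be visible in the producer account, which you can confirm in the Lake Formation console as described in the next steps.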
On the Lake Formation console, select the database. On the Create dropdown list, choose Resource link . Provide a name rl_bank_iceberg and choose Create . Let’s grant Describe permission on the resource link to the Data-Analyst role in the producer account in the following steps. In the left navigation pane, choose Data permissions . Choose the Data-Analyst role. Select the resource link rl_bank_iceberg for the database as shown in the following screenshot. Grant Describe permission on the resource link. Note: Cross-account Data Catalog views can’t be created using a resource link, although a resource link is needed for the SDK use of SPARK dialect. Next, add the central account Data Catalog as a Data Source in Athena from producer account: Open the Athena console. On the left navigation pane, choose Data sources and catalogs . Choose Create data source . Select S3-AWS Glue Data Catalog . Choose AWS – Glue Data Catalog in another account and name the data source as centraladmin . Choose Next and then create data source. After the data source is created, navigate to the Query editor and verify the Data source centraladmin appears, as shown in the following screenshot. The definer role can also now access and query the central catalog database. Create SPARK dialect view In this step, you create a view with SPARK dialect, using AWS Glue CLI command create-table : Sign in to the AWS console in the producer account as Data-Analyst role. Enter the following command in your CLI environment, such as AWS CloudShell , to create a SPARK DIALECT: aws glue create-table --cli-input-json '{ "DatabaseName": "rl_bank_iceberg", "TableInput": { "Name": "mdv_transaction1", "StorageDescriptor": { "Columns": [ { "Name": "transaction_id", "Type": "string" }, { "Name": "transaction_type", "Type": "string" }, { "Name": "transaction_amount", "Type": "float" }, { "Name": "transaction_location", "Type": "string" }, { "Name": "transaction_date", "Type": "date" } ], "SerdeInfo": {} }, "ViewDefinition": { "SubObjects": [ "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table1", "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table2" ], "IsProtected": true, "Representations": [ { "Dialect": "SPARK", "DialectVersion": "1.0", "ViewOriginalText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100;", "ViewExpandedText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100;" } ] } } }' Open the Lake Formation console and verify if the view is created. Verify the dialect of the view on the SQL definitions tab for the view details. Add ATHENA dialect To add ATHENA dialect, follow these steps: On the Athena console, select centraladmin from the Data source . 
Enter the following SQL script to create the ATHENA dialect for the same view: ALTER VIEW mdv_transaction1 FORCE ADD DIALECT AS SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100 We can't use the resource link rl_bank_iceberg in the Athena query editor to create or alter a view in the central account. Verify the added dialect by running a preview in Athena. To run the query, you can use either the resource link rl_bank_iceberg from the producer account catalog or the centraladmin catalog. The following screenshot shows querying using the resource link of the database in the producer account catalog. The following screenshot shows querying the view from the producer using the connected catalog centraladmin as the data source. Verify the dialects on the view by inspecting the table in the Lake Formation console. You can now query the view as the Data-Analyst role in the producer account, using both Athena and Spark. The view will also show in the central account, with access for the Lake Formation admin. You can also create the view with the ATHENA dialect first and then add the SPARK dialect. The SQL syntax to create the view in the ATHENA dialect is shown in the following example: create protected multi dialect view mdv_transaction1 security definer as SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100; The update-table CLI command to add the corresponding SPARK dialect is shown in the following example: aws glue update-table --cli-input-json '{ "DatabaseName": "rl_bank_iceberg", "ViewUpdateAction": "ADD", "Force": true, "TableInput": { "Name": "mdv_transaction1", "StorageDescriptor": { "Columns": [ { "Name": "transaction_id", "Type": "string" }, { "Name": "transaction_type", "Type": "string" }, { "Name": "transaction_amount", "Type": "float" }, { "Name": "transaction_location", "Type": "string" }, { "Name": "transaction_date", "Type": "date" } ], "SerdeInfo": {} }, "ViewDefinition": { "SubObjects": [ "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table1", "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table2" ], "IsProtected": true, "Representations": [ { "Dialect": "SPARK", "DialectVersion": "1.0", "ViewOriginalText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100", "ViewExpandedText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100" } ] } } }' The following is a sample Python script to create a SPARK dialect view: glueview-createtable.py . The following code block is a sample AWS Glue extract, transform, and load (ETL) script to access the Spark dialect of the view from AWS Glue 5.0 from the central account.
The AWS Glue job execution role should have Lake Formation SELECT permission on the AWS Glue view: from pyspark.context import SparkContext from pyspark.sql import SparkSession aws_region = "<your-region>" aws_account_id = "<your-central-account-id>" local_catalogname = "spark_catalog" warehouse_path = "s3://<your-bucket-name>/bankdata_icebergdb/transaction-table1" spark = SparkSession.builder.appName('query_glue_view') \ .config('spark.sql.extensions','org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions') \ .config(f'spark.sql.catalog.{local_catalogname}', 'org.apache.iceberg.spark.SparkSessionCatalog') \ .config(f'spark.sql.catalog.{local_catalogname}.catalog-impl', 'org.apache.iceberg.aws.glue.GlueCatalog') \ .config(f'spark.sql.catalog.{local_catalogname}.client.region', aws_region) \ .config(f'spark.sql.catalog.{local_catalogname}.glue.account-id', aws_account_id) \ .config(f'spark.sql.catalog.{local_catalogname}.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO') \ .config(f'spark.sql.catalog.{local_catalogname}.warehouse',warehouse_path) \ .getOrCreate() spark.sql(f"show databases").show() spark.sql(f"SHOW TABLES IN {local_catalogname}.bankdata_icebergdb").show() spark.sql(f"SELECT * FROM {local_catalogname}.bankdata_icebergdb.mdv_transaction1").show() In the AWS Glue job details, for Lake Formation managed tables and for Iceberg tables, set additional parameters respectively as follows: --enable-lakeformation-fine-grained-access = true --datalake-formats = iceberg Cleanup To avoid incurring costs, clean up the resources you used for this post: Revoke the Lake Formation permissions granted to the Data-Analyst role and Producer account Drop the Athena tables Delete the Athena query results from your Amazon Simple Storage Service (Amazon S3) bucket Delete the Data-Analyst role from IAM Conclusion In this post, we demonstrated how to use cross-account IAM definer roles with AWS Glue Data Catalog views . We showed how data owner accounts can create and manage views in a central governance account while maintaining security and control over their data assets. This feature enables enterprises to implement sophisticated data mesh architectures without compromising on security or requiring data duplication. The ability to use cross-account definer roles with Data Catalog views provides several key advantages: Streamlines view management in multi-account environments Maintains existing CI/CD workflows and automation Enhances security through centralized governance Reduces operational overhead by eliminating the need for data duplication As organizations continue to build and scale their data lakehouse architectures across multiple AWS accounts, cross-account definer roles for Data Catalog views provide a crucial capability for implementing efficient, secure, and well-governed data sharing patterns. About the authors Aarthi Srinivasan Aarthi is a Senior Big Data Architect at Amazon Web Services (AWS). She works with AWS customers and partners to architect data lake solutions, enhance product features, and establish best practices for data governance. Sundeep Kumar Sundeep is a Sr. Specialist Solutions Architect at Amazon Web Services (AWS), helping customers build data lake and analytics platforms and solutions. When not building and designing data lakes, Sundeep enjoys listening to music and playing guitar.
| 2026-01-13T09:29:13
https://aws.amazon.com/blogs/big-data/aws-analytics-at-reinvent-2025-unifying-data-ai-and-governance-at-scale/ | AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, accelerate insights with AI, and maintain robust governance without sacrificing agility. Amazon SageMaker: Your data platform, simplified AWS introduced a faster, simpler approach to data platform onboarding for Amazon SageMaker Unified Studio . The new one-click onboarding experience eliminates weeks of setup, so teams can start working with existing datasets in minutes using their current AWS Identity and Access Management (IAM) roles and permissions. Accessible directly from Amazon SageMaker , Amazon Athena , Amazon Redshift , and Amazon S3 Tables consoles, this streamlined experience automatically creates SageMaker Unified Studio projects with existing data permissions intact. At its core is a powerful new serverless notebook that reimagines how data professionals work. This single interface combines SQL queries, Python code, Apache Spark processing, and natural language prompts, backed by Amazon Athena for Apache Spark to scale from interactive exploration to petabyte-scale jobs. Data engineers, analysts, and data scientists no longer need to context-switch between different tools based on workload—they can explore data with SQL, build models with Python, and use AI assistance, all in one place. The introduction of Amazon SageMaker Data Agent in the new SageMaker notebooks marks a pivotal moment in AI-assisted development for data builders. This built-in agent doesn’t only generate code, it understands your data context, catalog information, and business metadata to create intelligent execution plans from natural language descriptions. When you describe an objective, the agent breaks down complex analytics and machine learning (ML) tasks into manageable steps, generates the required SQL and Python code, and maintains awareness of your notebook environment throughout the entire process. This capability transforms hours of manual coding into minutes of guided development, which means teams can focus on gleaning insights rather than repetitive boilerplate. Embracing open data with Apache Iceberg One significant theme across this year’s launches was the widespread adoption of Apache Iceberg across AWS analytics, transforming how organizations manage petabyte-scale data lakes. Catalog federation to remote Iceberg catalogs through the AWS Glue Data Catalog addresses a critical challenge in modern data architectures. 
You can now query remote Iceberg tables, stored in Amazon Simple Storage Service (Amazon S3) and catalogued in remote Iceberg catalogs, using preferred AWS analytics services such as Amazon Redshift, Amazon EMR , Amazon Athena, AWS Glue, and Amazon SageMaker, without moving or copying tables. Metadata synchronizes in real time, providing query results that reflect the current state. Catalog federation supports both coarse-grained access control and fine-grained access permissions through AWS Lake Formation enabling cross-account sharing and trusted identity propagation while maintaining consistent security across federated catalogs. Amazon Redshift now writes directly to Apache Iceberg tables, enabling true open lakehouse architectures where analytics seamlessly span data warehouses and lakes. Apache Spark on Amazon EMR 7.12 , AWS Glue, Amazon SageMaker notebooks, Amazon S3 Tables, and the AWS Glue Data Catalog now support Iceberg V3’s capabilities, including deletion vectors that mark deleted rows without expensive file rewrites, dramatically reducing pipeline costs and accelerating data modifications and row lineage. V3 automatically tracks every record’s history, creating audit trails essential for compliance and has table-level encryption that helps organizations meet stringent privacy regulations. These innovations mean faster writes, lower storage costs, comprehensive audit trails, and efficient incremental processing across your data architecture. Governance that scales with your organization Data governance received substantial attention at re:Invent with major enhancements to Amazon SageMaker Catalog . Organizations can now curate data at the column level with custom metadata forms and rich text descriptions , indexed in real time for immediate discoverability. New metadata enforcement rules require data producers to classify assets with approved business vocabulary before publication, providing consistency across the enterprise. The catalog uses Amazon Bedrock large language models (LLMs) to automatically suggest relevant business glossary terms by analyzing table metadata and schema information, bridging the gap between technical schemas and business language. Perhaps most importantly, SageMaker Catalog now exports its entire asset metadata as queryable Apache Iceberg tables through Amazon S3 Tables. This way, teams can analyze catalog inventory with standard SQL to answer questions like “which assets lack business descriptions?” or “how many confidential datasets were registered last month?” without building custom ETL infrastructure. As organizations adopt multi-warehouse architectures to scale and isolate workloads, the new Amazon Redshift federated permissions capability eliminates governance complexity. Define data permissions one time from a Amazon Redshift warehouse, and they automatically enforce them across the warehouses in your account. Row-level, column-level, and masking controls apply consistently regardless of which warehouse queries originate from, and new warehouses automatically inherit permission policies. This horizontal scalability means organizations can add warehouses without increasing governance overhead, and analysts immediately see the databases from registered warehouses. Accelerating AI innovation with Amazon OpenSearch Service Amazon OpenSearch Service introduced powerful new capabilities to simplify and accelerate AI application development. 
With support for OpenSearch 3.3 , agentic search enables precise results using natural language inputs without the need for complex queries, making it easier to build intelligent AI agents. The new Apache Calcite-powered PPL engine delivers query optimization and an extensive library of commands for more efficient data processing. As seen in Matt Garman’s keynote , building large-scale vector databases is now dramatically faster with GPU acceleration and auto-optimization . Previously, creating large-scale vector indexes required days of building time and weeks of manual tuning by experts, which slowed innovation and prevented cost-performance optimizations. The new serverless auto-optimize jobs automatically evaluate index configurations—including k-nearest neighbors (k-NN) algorithms, quantization, and engine settings—based on your specified search latency and recall requirements. Combined with GPU acceleration, you can build optimized indexes up to ten times faster at 25% of the indexing cost, with serverless GPUs that activate dynamically and bill only when providing speed boosts. These advancements simplify scaling AI applications such as semantic search, recommendation engines, and agentic systems, so teams can innovate faster by dramatically reducing the time and effort needed to build large-scale, optimized vector databases. Performance and cost optimization Also announced in the keynote , Amazon EMR Serverless now eliminates local storage provisioning for Apache Spark workloads, introducing serverless storage that reduces data processing costs by up to 20% while preventing job failures from disk capacity constraints. The fully managed, auto scaling storage encrypts data in transit and at rest with job-level isolation, allowing Spark to release workers immediately when idle rather than keeping them active to preserve temporary data. Additionally, AWS Glue introduced materialized views based on Apache Iceberg, storing precomputed query results that automatically refresh as source data changes. Spark engines across Amazon Athena, Amazon EMR, and AWS Glue intelligently rewrite queries to use these views, accelerating performance by up to eight times while reducing compute costs. The service handles refresh schedules, change detection, incremental updates, and infrastructure management automatically. The new Apache Spark upgrade agent for Amazon EMR transforms version upgrades from months-long projects into week-long initiatives. Using conversational interfaces, engineers express upgrade requirements in natural language while the agent automatically identifies API changes and behavioral modifications across PySpark and Scala applications. Engineers review and approve suggested changes before implementation, maintaining full control while the agent validates functional correctness through data quality checks. Currently supporting upgrades from Spark 2.4 to 3.5, this capability is available through SageMaker Unified Studio, Kiro CLI , or an integrated development environment (IDE) with Model Context Protocol compatibility. For workflow optimization, AWS introduced a new Serverless deployment option for Amazon Managed Workflows for Apache Airflow (Amazon MWAA), which eliminates the operational overhead of managing Apache Airflow environments while optimizing costs through serverless scaling. This new offering addresses key challenges of operational scalability, cost optimization, and access management that data engineers and DevOps teams face when orchestrating workflows. 
With Amazon MWAA Serverless , data engineers can focus on defining their workflow logic rather than monitoring for provisioned capacity. They can now submit their Airflow workflows for execution on a schedule or on demand, paying only for the actual compute time used during each task’s execution. Looking forward These launches collectively represent more than incremental improvements. They signal a fundamental shift in how organizations are approaching analytics. By unifying data warehousing, data lakes, and ML under a common framework built on Apache Iceberg, simplifying access through intelligent interfaces powered by AI, and maintaining robust governance that scales effortlessly, AWS is giving organizations the tools to focus on insights rather than infrastructure. The emphasis on automation, from AI-assisted development to self-managing materialized views and serverless storage, reduces operational overhead while improving performance and cost efficiency. As data volumes continue to grow and AI becomes increasingly central to business operations, these capabilities position AWS customers to accelerate their data-driven initiatives with unprecedented simplicity and power. To view the re:Invent 2025 Innovation Talk on analytics, visit Harnessing analytics for humans and AI on YouTube. About the authors Larry Weber Larry leads product marketing for the analytics portfolio at AWS. | 2026-01-13T09:29:13
https://support.atlassian.com/ja/bitbucket-cloud/docs/search-in-bitbucket-cloud/ | Search in Bitbucket Cloud | Bitbucket Cloud | Atlassian Support
アドバイザリ - 2016-06-17 - パスワードのリセット Bitbucket Cloud のサポート終了のお知らせを見る AWS CodeDeploy アプリ削除のサポート終了 - 2019-12-03 チュートリアル チュートリアル: Bitbucket と Git を使用する Git リポジトリを作成する Git リポジトリをコピーしてファイルを追加する Bitbucket Cloud で Git リポジトリから変更をプルする Git ブランチを使用してファイルをマージする チュートリアル: Bitbucket と Sourcetree を使用する 新しいリポジトリを作成する リポジトリをコピーしてファイルを追加する Bitbucket でリポジトリから変更をプルする Sourcetree ブランチを使用して更新をマージする チュートリアル: Bitbucket のプル リクエストについて リポジトリを作成 (およびレビュアーを追加) する クローンを行い、新しいブランチに変更を加える プル リクエストを作成して変更をマージする Bitbucket Cloud でプロジェクトを作成して管理する プロジェクト用のアクセス トークン プロジェクトのアクセス トークン作成 プロジェクト レベルのアクセス トークンの権限 プロジェクト用のアクセス トークンを使用する プロジェクト用のアクセス トークンを取り消す プロジェクト用のアクセス トークンをローテーションする プロジェクト設定を行う macOS でプロジェクト アクセス キーをセットアップする Windows でプロジェクト アクセス キーをセットアップする Linux でプロジェクト アクセス キーをセットアップする 1 台のデバイスで複数のプロジェクト アクセス キーを管理する 初期設定のレビュアーをプロジェクトに追加する プロジェクトにリポジトリを追加する プロジェクトのブランチ モデルの構成 プロジェクトを見つけて共有する 既存のプロジェクトを管理および編集する プロジェクトのマージ戦略をセットアップする プロジェクト詳細の更新 プロジェクトのブランチ制限を設定する ユーザーとグループのプロジェクト権限を設定する プロジェクトの作成 アトラシアン サポート Bitbucket リソース Bitbucket Cloud でリポジトリをセットアップおよび操作する クラウド データセンター Bitbucket Cloud での検索 Bitbucket で検索を開始するには、上部ナビゲーション バーの右上隅にある検索フィールドを選択して、1 つの単語またはフレーズ全体 (二重引用符で囲む) を入力します。 Bitbucket の任意の場所で検索を開始するキーボード ショートカットは " / " です。検索語はファイル パス、ファイル名、およびファイル内のあらゆるコンテンツと照合されます。 Bitbucket の検索結果はコードを認識します。つまり、検索結果はランク付けされ、関数とタイプの定義が優先して表示されます。また、検索結果を絞り込みやすくするために演算子や修飾子を使用することもできます。 検索範囲 検索範囲は、検索場所によって変わります。 検索コンテキスト 検索範囲 アカウント ユーザーやワークスペースが所有している、またはアクセス可能なすべてのリポジトリ リポジトリ 個別のリポジトリおよびそれらのサブディレクトリ 自身のアクセス権に関連付けられていない公開リポジトリで検索するには、そのリポジトリに移動し、そこで検索します。 ファイルまたはパスの検索 ファイル名またはパスの一部を検索するだけで、ファイルを検索できます。検索で path を使用する場合、パス セグメントの完全一致のみがサポートされますが、修飾子を使用せずにファイル名の一部を使用して検索できます。以下の表の例を参照してくだい。 クエリ 結果 package.json package.json という名前のファイルを見つける package lock json package 、 lock 、および json を含むファイルを見つける ( package-lock.json など) package.json path:test test を含むパスを持つ、 package.json という名前のファイルを見つける MyClass MyClass.java および MyClassTest.java のファイル名を見つける フレーズ クエリ フレーズ クエリを使用すると、特定の組み合わせで表示される複数の単語を検索できます。 フレーズを検索する場合、単語を引用符で囲みます。たとえば、 abstract に class (または単語の一部) が続く一連の単語を含むフレーズを検索する場合、クエリは次のようになります。 "abstract class" このクエリは " abstract(class " などのフレーズも検索します。 同じ検索クエリでも二重引用符を使用しない場合は abstract と class の両方を任意の順序で含むファイルが返されます。 検索演算子 検索演算子を使用して検索結果を絞り込むことができます。 演算子はすべて大文字にする必要があります。 演算子を単独で使用することはできません。必ず検索語句と一緒に使用する必要があります。 検索クエリで AND を使用することはできません。複数の検索語は暗黙的に組み合わせられます。たとえば、 bitbucket jira のクエリは、 bitbucket と cloud の両方を含むファイルのみが一致することを示します。 次の検索演算子を利用できます。 演算子 クエリ例 結果 なし bitbucket jira bitbucket と jira という単語を任意の順序で含むファイルを返します。 NOT bitbucket NOT jira bitbucket を含み、 jira を含まないファイルを返します。 - bitbucket -jira 単語の前に使用します。 bitbucket を含み、 jira を含まないファイルを返します。 有効 () および無効 () な検索構文の例: 有効性 クエリ 結果 MyClass AND MyComponent NOT "YourClass" AND は有効な構文ではありません。複数の検索語は暗黙的に組み合わせられます。 NOT "YourClass" 演算子を単独で使用することはできません。除外対象の前に一致すべき検索語を指定する必要があります。 MyClass MyComponent NOT "YourClass" MyClass および MyComponent という単語を含み、 YourClass を含まないファイルを見つけます。 検索修飾子 修飾子を指定すると、検索結果を絞り込むことができます。 修飾子は key:value の形式で使用します。 複数の修飾子を組み合わせることができます。以降の「複数の修飾子を使用する」セクションを参照してください。 修飾子は NOT 演算子を使用して否定できます。上述の「検索演算子」セクションを参照してください。 次の検索修飾子を利用できます。 修飾子 クエリ例 結果 repo:<repo slug> <term> repo:myrepo MyClass myrepo にある、 MyClass という単語を含むファイルを照合します。 リポジトリ名ではワイルドカードはサポートされません。 リポジトリの既定のブランチのみが検索されます。 project:<project key> <term> project:MYPROJ jira MYPROJ キーのプロジェクトで、 jira という単語を含むファイルを照合します。 path:<directory|filename> <term> path:src MyClass パスが src と一致し、 MyClass という単語を含むファイル ext:<file extension> <term> ext:lhs jira jira という単語を含む、 .lhs 拡張子の Haskell ファイルを照合します。 lang:<language> <term> lang:c jira .c or .h 拡張子で jira という単語を含む C 
Path modifier

Code search can be restricted to consider only specific paths. For search purposes, a file path is split into segments (the parts separated by /), covering directories and the file name. Matching is performed on one or more segments and is case sensitive. Partial matches within a segment are not performed.
- path:src MyClass: files whose path matches src and that contain the word MyClass
- path:/src MyClass: files whose path starts with src and that contain the word MyClass
- path:src/main MyClass: files whose path matches src/main and that contain the word MyClass
- path:src/*/module MyClass: files whose path matches src, followed by any other string, followed by module, and that contain the word MyClass
- path:styles/*.css class: files whose path matches styles, with the css extension, that contain the word class; any number of other segments may appear between styles and the file name
- MyClass NOT path:src: files containing the word MyClass whose path does not match src

Language and file extension modifiers

Code search can be restricted to consider only a specific language or file extension. For some languages, specifying a language condition is equivalent to specifying the file extension; for example, lang:java is equivalent to ext:java. For other languages, multiple file extensions map to a single language; for example, the .hs, .lhs, and .hs-boot file extensions are used by the Haskell programming language and are matched when lang:haskell is specified. The 'language' used here is unrelated to the repository's Language setting.

The languages recognized by code search are: ada, asp.net, assembly, c, c++, c#, clojure, cobol, cql, css, cython, fortran, go, groovy, haskell, html, java, javascript, json, kotlin, latex, less, lisp, markdown, mathematica, matlab, objective-c, ocaml, pascal, perl, php, plain, plsql, properties, python, r, ruby, rust, sas, scala, scss, shell, sieve, soy, sql, swift, velocity, xml, yaml.

Using multiple modifiers

Modifiers are combined implicitly according to their type, so you don't need operators between modifiers and other terms. When using multiple search modifiers in a query, note the following:
- Search modifiers of the same kind are combined implicitly (any of them may match).
- Search modifiers of different kinds are combined implicitly (all of them must match).
- Search modifiers apply to the whole search expression.
For example, a query for files that are in repo A or repo B, contain the phrase "search-term", and have a .js or .jsx extension looks like this: repo:A repo:B ext:js ext:jsx search-term

Examples of valid and invalid search syntax:
- Valid: ext:js project:myProject MyComponent. Finds files with the js extension, in the myProject project, that contain the word MyComponent.
- Valid: MyClass NOT repo:test. Finds all files containing the word MyClass, excluding the test repository.
- Valid: MyClass -ext:java. Finds all files containing the word MyClass, excluding java files.
- Invalid: ext:js AND project:myProject MyComponent. AND is not valid syntax; multiple search terms are combined implicitly.
- Valid: ext:js NOT project:myProject MyComponent. Finds files with the js extension, in any project other than myProject, that contain the word MyComponent.
- Valid: ext:js ext:java MyComponent. Finds files with the js or java extension that contain the word MyComponent.

Code search considerations

There are several considerations for how searches are performed:
- Search uses the main branch of the repository.
- Files smaller than 320 KB are indexed; results from larger files are not shown.
- Wildcard searches (for example, qu?ck buil*) are not supported.
- The following characters are removed from search terms: !"#$%&'()*+,/;:<=>?@[\]^`{|}~-
- Regular expressions are not supported in queries.
- Searches are not case sensitive (but search operators must be in all uppercase).
- A query can contain at most 9 expressions (combinations of words and operators).
- The maximum query length is 250 characters.
- Search results only show code that you have permission to view.
| 2026-01-13T09:29:13
https://aws.amazon.com/blogs/big-data/use-amazon-sagemaker-custom-tags-for-project-resource-governance-and-cost-tracking/#Comments | Use Amazon SageMaker custom tags for project resource governance and cost tracking | AWS Big Data Blog

Use Amazon SageMaker custom tags for project resource governance and cost tracking
by David Victoria, Ahan Malli, and Rohit Srikanta on 08 JAN 2026 in Advanced (300), Amazon SageMaker, Amazon SageMaker Unified Studio, Technical How-to

Amazon SageMaker announced a new feature that you can use to add custom tags to resources created through an Amazon SageMaker Unified Studio project. This helps you enforce tagging standards that conform to your organization's service control policies (SCPs) and enables cost tracking and reporting practices for resources created across the organization. As a SageMaker administrator, you can configure a project profile with tag configurations that are pushed down to projects that currently use or will use that project profile. The project profile can be set up to pass either required tag key-value pairs or a tag key with a default value that can be modified during project creation. All tags passed to the project are applied to the resources created by that project. This gives you a governance mechanism that ensures project resources have the expected tags across all projects in the domain. The first release of custom tags for project resources is supported through an application programming interface (API), via the Amazon DataZone SDKs. In this post, we look at use cases for custom tags and how to use the AWS Command Line Interface (AWS CLI) to add tags to project resources.

What we hear from customers

As customers continue to build and collaborate using AWS tools for model development, generative AI, data processing, and SQL analytics, they see the need to bring control and visibility to the resources being created. To support connectivity to these AWS tools from SageMaker Unified Studio projects, many different types of resources across AWS services need to be created. These resources are created through AWS CloudFormation stacks (through project environment deployment) by the Amazon SageMaker service. From customers we hear the following use cases:
- Customers need to enforce that tagging practices conform to company policies through the use of AWS controls, such as SCPs, for resource creation. These controls block the creation of resources unless specific tags are placed on the resource.
- Customers can also start with policies that enforce the correct tags at resource creation, with the additional goal of standardizing resource reporting. By placing identifiable information on resources when they are created, they gain consistency and completeness when performing cost attribution reporting and observability.

The customer Swiss Life uses SageMaker as a single solution for cataloging, discovery, sharing, and governance of their enterprise data across business domains. They require that all resources have a set of mandatory tags so their finance group can bill organizations across the company for the AWS resources created.

"The launch of project resource tags for Amazon SageMaker allows us to bring visibility to the costs incurred across our accounts.
With this capability we are able to meet the resource tagging guidelines of our company and have confidence in attributing costs across our multi-account setup for the resources created by Amazon SageMaker projects." – Tim Kopacz, Software Developer at Swiss Life

Prerequisites

To get started with custom tags, you must have the following resources:
- A SageMaker Unified Studio domain.
- An AWS Identity and Access Management (IAM) entity with privileges to make AWS CLI calls to the domain.
- An IAM entity authorized to make changes to the domain IAM provisioning role. If SageMaker created this for you, it will be called AmazonSageMakerProvisioning-<accountId>. The provisioning role provisions and manages resources defined in the selected blueprints in your account.

How to set up project resource tags

The following steps outline how you can configure custom tags for your SageMaker Unified Studio project resources:
1. (Optional) Update the SageMaker provisioning role to permit specific tag keys.
2. Create a new project profile with project resource tags configured.
3. Create a new project with project resource tags.
4. Update an existing project with project resource tags.
5. Validate that the resources are tagged.

(Optional) Update a SageMaker provisioning role to permit tag key values

The AmazonSageMakerProvisioning-<accountId> role has an AWS managed policy with the condition aws:TagKeys, which allows tags to be created by this role only if the tag key begins with AmazonDataZone. For this example, we will change the tag key to begin with different strings. Skip to Create a new project profile with project resource tags configured if you don't need tag keys to have a different structure (such as begins with, contains, and so on).
1. Open the AWS Management Console and go to IAM.
2. In the navigation pane, choose Roles.
3. In the list, choose AmazonSageMakerProvisioning-<accountId>.
4. Choose the Permissions tab.
5. Choose Add permissions, and then choose Create inline policy.
6. Under Policy editor, select JSON.
7. Enter the following policy. Add the strings under the condition aws:TagKeys. In this example, tag keys beginning with ACME or tag keys with the exact match of CostCenter will be created by the role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CustomTagsUnTagPermissions",
      "Effect": "Allow",
      "Action": [
        "codecommit:UntagResource",
        "iam:UntagRole",
        "logs:UntagResource",
        "athena:UntagResource",
        "redshift-serverless:UntagResource",
        "scheduler:UntagResource",
        "bedrock:UntagResource",
        "neptune-graph:UntagResource",
        "quicksight:UntagResource",
        "glue:UntagResource",
        "airflow:UntagResource",
        "secretsmanager:UntagResource",
        "lambda:UntagResource",
        "emr-serverless:UntagResource",
        "elasticmapreduce:RemoveTags",
        "sagemaker:DeleteTags",
        "ec2:DeleteTags"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceAccount": "${aws:PrincipalAccount}"
        },
        "ForAllValues:StringLike": {
          "aws:TagKeys": [
            "AmazonDataZone*",
            "ACME*",
            "CostCenter"
          ]
        },
        "Null": {
          "aws:ResourceTag/AmazonDataZoneProject": "false"
        }
      }
    },
    {
      "Sid": "CustomTagsTaggingPermissions",
      "Effect": "Allow",
      "Action": [
        "cloudformation:TagResource",
        "codecommit:TagResource",
        "iam:TagRole",
        "glue:TagResource",
        "athena:TagResource",
        "lambda:TagResource",
        "redshift-serverless:TagResource",
        "logs:TagResource",
        "secretsmanager:TagResource",
        "sagemaker:AddTags",
        "emr-serverless:TagResource",
        "neptune-graph:TagResource",
        "bedrock:TagResource",
        "elasticmapreduce:AddTags",
        "airflow:TagResource",
        "scheduler:TagResource",
        "quicksight:TagResource",
        "emr-containers:TagResource",
        "logs:CreateLogGroup",
        "athena:CreateWorkGroup",
        "scheduler:CreateScheduleGroup",
        "cloudformation:CreateStack",
        "ec2:*"
      ],
      "Resource": "*",
      "Condition": {
        "ForAnyValue:StringLike": {
          "aws:TagKeys": [
            "AmazonDataZone*",
            "ACME*",
            "CostCenter"
          ]
        },
        "StringEquals": {
          "aws:ResourceAccount": "${aws:PrincipalAccount}"
        }
      }
    }
  ]
}

It's possible to scope down the specific AWS service tag and un-tag permissions based on which blueprints or capabilities are being used.
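If you manage the provisioning role from the command line rather than the console, the same inline policy can be attached with the AWS CLI. The following is a minimal sketch, assuming the JSON above has been saved locally as custom-tags-policy.json; the policy name and the account ID in the role name are placeholders:

# Attach the inline tagging policy above to the SageMaker provisioning role
# (the account ID suffix and the policy name are placeholders)
aws iam put-role-policy \
  --role-name AmazonSageMakerProvisioning-111122223333 \
  --policy-name CustomProjectResourceTags \
  --policy-document file://custom-tags-policy.json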
Create a new project profile with project resource tags configured

Use the following steps to create a new SQL Analytics project profile with custom tags. The example uses AWS CLI commands.
1. Open the AWS CloudShell console.
2. Create a project profile using the following CLI command.
- The project-resource-tags parameter consists of key (tag key), value (tag value), and isValueEditable (a boolean indicating whether the tag value can be modified during project creation or update).
- The allow-custom-project-resource-tags parameter, when set to true, permits the project creator to create additional key-value pairs. The key needs to conform to the inline policy of the AmazonSageMakerProvisioning-<accountId> role.
- The project-resource-tags-description parameter is a description field for project resource tags. The maximum character limit is 2,048. The description needs to be passed in every time create-project-profile or update-project-profile is called.

aws datazone create-project-profile \
  --name "SQL Analytics with Project Resource Tags" \
  --description "Analyze your data in SageMaker Lakehouse using SQL" \
  --domain-identifier "$DOMAIN_ID" \
  --region "$REGION" \
  --status ENABLED \
  --project-resource-tags '[ { "key": "ACME-Application", "value": "SageMaker", "isValueEditable": false }, { "key": "CostCenter", "value": "123", "isValueEditable": true } ]' \
  --allow-custom-project-resource-tags \
  --environment-configurations '[
    { "name": "Tooling", "description": "Configuration for the Tooling Environment", "environmentBlueprintId": "", "deploymentMode": "ON_CREATE", "deploymentOrder": 0, "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "enableSpaces", "value": "false", "isEditable": false }, { "name": "maxEbsVolumeSize", "isEditable": false }, { "name": "idleTimeoutInMinutes", "isEditable": false }, { "name": "lifecycleManagement", "isEditable": false }, { "name": "enableNetworkIsolation", "isEditable": false } ] } },
    { "name": "Lakehouse Database", "description": "Creates databases in Amazon SageMaker Lakehouse for storing tables in S3 and Amazon Athena resources for your SQL workloads", "environmentBlueprintId": "", "deploymentMode": "ON_CREATE", "deploymentOrder": 1, "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "glueDbName", "value": "glue_db", "isEditable": true } ] } },
    { "name": "OnDemand RedshiftServerless", "description": "Enables you to create an additional Amazon Redshift Serverless workgroup for your SQL workloads", "environmentBlueprintId": "", "deploymentMode": "ON_DEMAND", "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "redshiftDbName", "value": "dev", "isEditable": true }, { "name": "redshiftMaxCapacity", "value": "512", "isEditable": true }, { "name": "redshiftWorkgroupName", "value": "redshift-serverless-workgroup", "isEditable": true }, { "name": "redshiftBaseCapacity", "value": "128", "isEditable": true }, { "name": "connectionName", "value": "redshift.serverless", "isEditable": true }, { "name": "connectToRMSCatalog", "value": "false", "isEditable": false } ] } },
    { "name": "OnDemand Catalog for Redshift Managed Storage", "description": "Enables you to create additional catalogs in Amazon SageMaker Lakehouse for storing data in Redshift Managed Storage", "environmentBlueprintId": "", "deploymentMode": "ON_DEMAND", "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "catalogName", "isEditable": true }, { "name": "catalogDescription", "value": "RMS catalog", "isEditable": true } ] } }
  ]'

This project profile will have the tag ACME-Application = SageMaker placed on all projects associated with the project profile, and that value cannot be modified by the project creator. The tag CostCenter = 123 can have its value modified by the project creator because the isValueEditable property is set to true.
3. Grant permissions for users to use the project profile during project creation. In the Authorization section of the project profile, set either Selected users or groups or Allow all users and groups.

The use of the allow-custom-project-resource-tags parameter means the project creator can add their own tags (key-value pairs).
The key must conform to the condition check in the policy of the provisioning role (AmazonSageMakerProvisioning-<accountId>). If the allow-custom-project-resource-tags parameter is changed to false after a project has created tags, the tags created by the project will be removed during the next project update.

Updates to the project profile

Updates to project resource tags are possible through the update-project-profile command. The command replaces all values in the project-resource-tags section, so be sure to include the full set of tags. Updates to the project profile are reflected in projects after running the update-project command or when a new project is created using the project profile. The following example adds a new tag, ACME-BusinessUnit = Retail. There are three ways to work with the project-resource-tags parameter when updating the project profile:
- Passing a non-empty list of project resource tags replaces the tags currently configured on the project profile.
- Passing an empty list of project resource tags clears out all previously configured tags: --project-resource-tags '[]'
- Not including the project-resource-tags parameter keeps previously configured tags as-is.

aws datazone update-project-profile \
  --domain-identifier "$DOMAIN_ID" \
  --identifier "$PROJECT_PROFILE_ID" \
  --region "$REGION" \
  --project-resource-tags '[ { "key": "ACME-Application", "value": "SageMaker", "isValueEditable": false }, { "key": "CostCenter", "value": "123", "isValueEditable": true }, { "key": "ACME-BusinessUnit", "value": "Retail", "isValueEditable": false } ]'

Create a new project with project resource tags

The following steps walk you through creating a new project that inherits tags from the project profile and lets the project creator modify one of the tag values. Create a project using the following example CLI command. Modify the CostCenter tag value using the --resource-tags parameter. Tags configured on the project profile where the isValueEditable attribute is false are pushed to the project automatically.

aws datazone create-project \
  --domain-identifier "$DOMAIN_ID" \
  --region "$REGION" \
  --name "$PROJECT_NAME" \
  --description "New project with tags" \
  --project-profile-id "$PROJECT_PROFILE_ID" \
  --resource-tags '{ "CostCenter": "456" }'

Update an existing project with project resource tags

For existing projects associated with the project profile, you must update the project for the new tags to be applied. Update the project using the following example CLI command. In this scenario, an editable value needs to be updated and a new tag added: the CostCenter tag will have its default value overwritten with "789", and the new ACME-Department = Finance tag will be added.

aws datazone update-project \
  --domain-identifier "$DOMAIN_ID" \
  --identifier "$PROJECT_ID" \
  --project-profile-version "latest" \
  --region "$REGION" \
  --resource-tags '{ "CostCenter": "789", "ACME-Department": "Finance" }'

Project-level tags (those not configured from the project profile) need to be passed during project update to be preserved. For tags with isValueEditable = true configured from the project profile, any override previously set needs to be applied again, or the value will revert to the default from the project profile.

Validating resources are tagged

Validate that tags are placed correctly. An example resource that is created by the project is the project IAM role. Viewing the tags for this role should show the tags configured from the project profile.
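As an alternative to the console walkthrough that follows, you can also check the tags from the AWS CLI. A quick sketch (the role name below is a placeholder; use the actual datazone_usr_role_ name from your project):

# List the tags attached to the project's IAM role (role name is a placeholder)
aws iam list-role-tags --role-name datazone_usr_role_example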
1. Open SageMaker Unified Studio to get the project role from the Project details section of the project. The role name begins with datazone_usr_role_.
2. Open the IAM console.
3. In the navigation pane, choose Roles.
4. Search for the project IAM role.
5. Select the Tags tab.

Conclusion

In this post, we discussed tagging-related use cases from customers and walked through getting started with custom tags in Amazon SageMaker to place tags on the resources created by the project. By giving administrators a way to configure project profiles with standardized tag configurations, you can now help ensure consistent tagging practices across all SageMaker Unified Studio projects while maintaining compliance with SCPs. This feature addresses two critical customer needs: enforcing organizational tagging standards through automated governance mechanisms and enabling accurate cost attribution reporting across multi-service deployments. To learn more, visit Amazon SageMaker, then get started with Project resource tags.

About the authors

David Victoria is a Senior Technical Product Manager with Amazon SageMaker at AWS. He focuses on improving administration and governance capabilities needed for customers to support their analytics systems. He is passionate about helping customers realize the most value from their data in a secure, governed manner.

Rohit Srikanta is a Senior Software Engineer at AWS. He works on building and scaling services within Amazon SageMaker. He focuses on developing robust and scalable distributed systems and is passionate about solving complex engineering challenges to deliver maximum customer value.

Ahan Malli is a Software Development Engineer at AWS. He works on the core data and governance layer behind Amazon SageMaker. He's passionate about building scalable distributed systems and streamlining developer workflows. When he's not coding, you can find him traveling or hiking Pacific Northwest trails.
| 2026-01-13T09:29:13
https://aws.amazon.com/blogs/big-data/automate-and-orchestrate-amazon-emr-jobs-using-aws-step-functions-and-amazon-eventbridge/ | Automate and orchestrate Amazon EMR jobs using AWS Step Functions and Amazon EventBridge | AWS Big Data Blog

Automate and orchestrate Amazon EMR jobs using AWS Step Functions and Amazon EventBridge
by Senthil Kamala Rathinam and Shashidhar Makkapati on 15 SEP 2025 in Advanced (300), Amazon CloudWatch, Amazon EC2, Amazon EMR, Amazon EventBridge, Analytics, AWS Step Functions, Technical How-to

Many enterprises are adopting Apache Spark for scalable data processing tasks such as extract, transform, and load (ETL), batch analytics, and data enrichment. As data pipelines evolve, the need for flexible and cost-efficient execution environments that support automation, governance, and performance at scale also evolves in parallel. Amazon EMR provides a powerful environment to run Spark workloads, and depending on workload characteristics and compliance requirements, teams can choose between fully managed options like Amazon EMR Serverless or more customizable configurations using Amazon EMR on Amazon Elastic Compute Cloud (Amazon EC2). In use cases where infrastructure control, data locality, or strict security postures are essential, such as in financial services, healthcare, or government, running transient EMR on EC2 clusters becomes a preferred choice. However, orchestrating the full lifecycle of these clusters, from provisioning to job submission and eventual teardown, can introduce operational overhead and risk if done manually.

To streamline this process, the AWS Cloud offers built-in orchestration capabilities using AWS Step Functions and Amazon EventBridge. Together, these services help you automate and schedule the entire EMR job lifecycle, reducing manual intervention while optimizing cost and compliance. Step Functions provides the workflow logic to manage cluster creation, Spark job execution, and cluster termination, and EventBridge schedules these workflows based on business or operational needs.

In this post, we discuss how to build a fully automated, scheduled Spark processing pipeline using Amazon EMR on EC2, orchestrated with Step Functions and triggered by EventBridge. We walk through how to deploy this solution using AWS CloudFormation, process public COVID-19 dataset data in Amazon Simple Storage Service (Amazon S3), and store the aggregated results in Amazon S3. This architecture is ideal for periodic or scheduled batch processing scenarios where infrastructure control, auditability, and cost-efficiency are critical.

Solution overview

This solution uses the publicly available COVID-19 dataset to illustrate how to build a modular, scheduled architecture for scalable and cost-efficient batch processing of time-bound data workloads. The solution follows these steps:
1. Raw COVID-19 data in CSV format is stored in an S3 input bucket.
2. A scheduled rule in EventBridge triggers a Step Functions workflow.
3. The Step Functions workflow provisions a transient Amazon EMR cluster using EC2 instances.
4. A PySpark job is submitted to the cluster to process COVID-19 hospital utilization data and compute monthly state-level averages of inpatient and ICU bed utilization, and COVID-19 patient percentages.
5. The processed results are written back to an S3 output bucket.
6. After successful job completion, the EMR cluster is automatically deleted.
7. Logs are persisted to Amazon S3 for observability and troubleshooting.

By automating this workflow, you avoid having to manually manage EMR clusters while gaining cost-efficiency by running compute only when needed. This architecture is ideal for periodic Spark jobs such as ETL pipelines, regulatory reporting, and batch analytics, especially when control, compliance, and customization are required. The following diagram illustrates the architecture for this use case.

The infrastructure is deployed using AWS CloudFormation to provide consistency and repeatability. AWS Identity and Access Management (IAM) roles grant least-privilege access to Step Functions, Amazon EMR, EC2 instances, and S3 buckets, and optional AWS Key Management Service (AWS KMS) encryption can secure data at rest in Amazon S3 and Amazon CloudWatch Logs. By combining a scheduled trigger, stateful orchestration, and centralized logging, this solution delivers a fully automated, cost-optimized, and secure way to run transient Spark workloads in production.

Prerequisites

Before you get started, make sure you have the following prerequisites:
- An AWS account. If you don't have one, you can sign up for one.
- An IAM user with administrator access. For instructions, see Create a user with administrative access.
- The AWS Command Line Interface (AWS CLI) installed on your local machine.
- A default virtual private cloud (VPC) and subnet in the target AWS Region where you plan to run the CloudFormation template.

Set up resources with AWS CloudFormation

To provision the required resources using a single CloudFormation template, complete the following steps:
1. Sign in to the AWS Management Console as an admin user.
2. Clone the sample repository to your local machine or AWS CloudShell and navigate into the project directory.
git clone https://github.com/aws-samples/sample-emr-transient-cluster-step-functions-eventbridge.git
cd sample-emr-transient-cluster-step-functions-eventbridge
3. Set an environment variable for the AWS Region where you plan to deploy the resources. Replace the placeholder with your Region code, for example, us-east-1.
export AWS_REGION=<YOUR AWS REGION>
4. Deploy the stack using the following command. Update the stack name if needed. In this example, the stack is created with the name covid19-analysis.
aws cloudformation deploy \
  --template-file emr_transient_cluster_step_functions_eventbridge.yaml \
  --stack-name covid19-analysis \
  --capabilities CAPABILITY_IAM \
  --region $AWS_REGION
5. You can monitor the stack creation progress on the AWS CloudFormation console on the Events tab. The deployment typically completes in under 5 minutes.
6. After the stack is successfully created, go to the Outputs tab on the AWS CloudFormation console and note the following values for use in later steps: InputBucketName, OutputBucketName, LogBucketName.

Set up the COVID-19 dataset

With your infrastructure in place, complete the following steps to set up the input data:
1. Download the COVID-19 data CSV file from HealthData.gov to your local machine.
2. Rename the downloaded file to covid19-dataset.csv.
3. Upload the renamed file to your S3 input bucket under the raw/ folder path, either from the console or with the AWS CLI as sketched below.
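For reference, a minimal AWS CLI sketch of that upload (replace <InputBucketName> with the value from the CloudFormation stack outputs; the local file name assumes you renamed the download as described above):

# Upload the renamed dataset into the raw/ prefix of the input bucket
aws s3 cp covid19-dataset.csv s3://<InputBucketName>/raw/covid19-dataset.csv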
Set up the PySpark script

Complete the following steps to set up the PySpark script:
1. Open AWS CloudShell from the console.
2. Confirm that you are working inside the sample-emr-transient-cluster-step-functions-eventbridge directory before running the next command.
3. Copy the PySpark script needed for this walkthrough into your input bucket:
aws s3 cp covid19_processor.py s3://<InputBucketName>/scripts/

This script processes COVID-19 hospital utilization data stored as CSV files in your S3 input bucket. When running the job, provide the following command-line arguments:
- --input: the S3 path to the input CSV files
- --output: the S3 path to store the processed results

The script reads the raw dataset, standardizes various date formats, and filters out records with invalid or missing dates. It then extracts key utilization metrics such as inpatient bed usage, ICU bed usage, and the percentage of beds occupied by COVID-19 patients, and calculates monthly averages grouped by state. The aggregated output is saved as timestamped CSV files in the specified S3 location. This example demonstrates how you can use PySpark to efficiently clean, transform, and analyze large-scale healthcare data to gain actionable insights on hospital capacity trends during the pandemic.

Configure a schedule in EventBridge

The Step Functions state machine is by default scheduled to run on December 31, 2025, as a one-time execution. You can update the schedule for recurring or one-time execution as needed. Complete the following steps:
1. On the EventBridge console, choose Schedules under Scheduler in the navigation pane.
2. Select the schedule named <StackName>-covid19-analysis and choose Edit.
3. Set your preferred schedule pattern. If you want to run the schedule one time, select One-time schedule for Occurrence and enter a date and time. If you want to run this on a recurring basis, select Recurring schedule and specify the schedule type as either Cron-based schedule or Rate-based schedule as needed.
4. Choose Next twice and choose Save schedule.

Start the workflow in Step Functions

Based on your EventBridge schedule, the Step Functions workflow will run automatically. For this walkthrough, complete the following steps to trigger it manually:
1. On the Step Functions console, choose State machines in the navigation pane.
2. Choose the state machine that begins with Covid19AnalysisStateMachine-*.
3. Choose Start execution.
4. In the Input section, provide the following JSON (replace the log bucket and output bucket names with the appropriate values captured earlier):
{ "LogUri": "s3://<LogBucketName>/logs/", "OutputS3Location": "s3://<OutputBucketName>/processed/" }
5. Choose Start execution to initiate the workflow.

Monitor the EMR job and workflow execution

After you start the workflow, you can track both the Step Functions state transitions and the EMR job progress in real time on the console.

Monitor the Step Functions state machine

Complete the following steps to monitor the Step Functions state machine:
1. On the Step Functions console, choose State machines in the navigation pane.
2. Choose the state machine that begins with Covid19AnalysisStateMachine-*.
3. Choose the running execution to view the visual workflow. Each state node updates as it progresses: green for success, red for failure.
4. To explore a step, choose its node and inspect the input, output, and error details in the side pane.

The following screenshot shows an example of a successfully executed workflow.
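You can also check recent executions from the terminal. A sketch using the AWS CLI (the state machine ARN is a placeholder; copy the real ARN from the Step Functions console):

# List the most recent executions of the state machine (ARN is a placeholder)
aws stepfunctions list-executions \
  --state-machine-arn "arn:aws:states:us-east-1:111122223333:stateMachine:Covid19AnalysisStateMachine-example" \
  --max-results 5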
Monitor the EMR cluster and EMR step

Complete the following steps to monitor the EMR cluster and EMR step status:
1. While the cluster is active, open the Amazon EMR console and choose Clusters in the navigation pane.
2. Locate the Covid19Cluster transient EMR cluster. Initially, it will be in Starting status.
3. On the Steps tab, you can see your Spark submit step listed. As the job progresses, the step status changes from Pending to Running and finally to Completed or Failed.
4. Choose the Applications tab to view the application UIs, in which you can access the Spark History Server and YARN Timeline Server for monitoring and troubleshooting.

Monitor CloudWatch logs

To enable CloudWatch logging and enhanced monitoring for your EMR on EC2 cluster, refer to Amazon EMR on EC2 – Enhanced Monitoring with CloudWatch using custom metrics and logs. This guide explains how to install and configure the CloudWatch agent using a bootstrap action, so you can stream system-level metrics (such as CPU, memory, and disk usage) and application logs from EMR nodes directly to CloudWatch. With this setup, you can gain real-time visibility into cluster health and performance, simplify troubleshooting, and retain critical logs even after the cluster is terminated. For this walkthrough, check the logs in the S3 log output location.

Confirm cluster deletion

When the Spark step is complete, Step Functions will automatically delete the Amazon EMR cluster. Refresh the Clusters page on the Amazon EMR console. You should see your cluster status change from Terminating to Terminated within a minute. By following these steps, you gain full end-to-end visibility into your workflow, from the moment the Step Functions state machine is triggered to the automatic shutdown of the EMR cluster. You can monitor execution progress, troubleshoot issues, confirm job success, and continuously optimize your transient Spark workloads.

Verify job output in Amazon S3

When the job is complete, complete the following steps to check the processed results in the S3 output bucket:
1. On the Amazon S3 console, choose Buckets in the navigation pane.
2. Open the output S3 bucket you noted earlier.
3. Open the processed folder.
4. Navigate into the timestamped subfolder to view the CSV output file.
5. Download the CSV file to view the processed results, as shown in the following screenshot.

Monitoring and troubleshooting

To monitor the progress of your Spark job running on a transient EMR on EC2 cluster, use the Step Functions console. It provides real-time visibility into each state transition in your workflow, from cluster creation and job submission to cluster deletion. This makes it straightforward to track execution flow and identify where issues might occur. During job execution, you can use the Amazon EMR console to access cluster-level monitoring, including YARN application statuses, step-level logs, and overall cluster health. If CloudWatch logging is enabled in your job configuration, driver and executor logs stream in near real time, so you can quickly detect and diagnose errors, resource constraints, or data skew within your Spark application.

After the workflow is complete, regardless of whether it succeeds or fails, you can perform a detailed post-execution analysis by reviewing the logs stored in the S3 bucket specified in the LogUri parameter. This log directory includes standard output and error logs, along with Spark history files, offering insights into execution behavior and performance metrics.
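For example, a quick AWS CLI sketch for pulling those logs locally (replace <LogBucketName> with the bucket you passed in the LogUri input):

# List and download the EMR logs written under the LogUri prefix
aws s3 ls s3://<LogBucketName>/logs/ --recursive
aws s3 cp s3://<LogBucketName>/logs/ ./emr-logs/ --recursive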
For continued access to the Spark UI during job execution, you can use persistent application UIs on the EMR console. These links remain accessible even after the cluster is stopped, enabling deeper root-cause analysis and performance tuning for future runs. This visibility into both workflow orchestration and job execution can help teams optimize their Spark workloads, reduce troubleshooting time, and build confidence in their EMR automation pipelines.

Clean up

To avoid incurring ongoing charges, clean up the resources provisioned during this walkthrough:
1. Empty the S3 buckets:
- On the Amazon S3 console, choose Buckets in the navigation pane.
- Select the input, output, and log buckets used in this tutorial.
- Choose Empty to remove all objects before deleting the buckets (optional).
2. Delete the CloudFormation stack:
- On the AWS CloudFormation console, choose Stacks in the navigation pane.
- Select the stack you created for this solution and choose Delete.
- Confirm the deletion to remove associated resources.

Conclusion

In this post, we showed how to build a fully automated and cost-effective Spark processing pipeline using Step Functions, EventBridge, and Amazon EMR on EC2. The workflow provisions a transient EMR cluster, runs a Spark job to process data, and stops the cluster after the job completes. This approach helps reduce costs while giving you full control over the process. This solution is ideal for scheduled data processing tasks such as ETL jobs, log analytics, or batch reporting, especially when you need detailed control over infrastructure, security, and compliance settings. To get started, deploy the solution in your environment using the CloudFormation stack provided and adjust it to fit your data processing needs. Check out the Step Functions Developer Guide and Amazon EMR Management Guide to explore further. Share your feedback and ideas in the comments or connect with your AWS Solutions Architect to fine-tune this pattern for your use case.

About the authors

Senthil Kamala Rathinam is a Solutions Architect at Amazon Web Services, specializing in Data and Analytics for banking customers across North America. With deep expertise in Data and Analytics, AI/ML, and Generative AI, he helps organizations unlock business value through data-driven transformation. Beyond work, Senthil enjoys spending time with his family and playing badminton.

Shashi Makkapati is a Senior Solutions Architect serving banking customers across North America. He specializes in data analytics, AI/ML, and generative AI, focusing on innovative solutions that transform financial organizations. Shashi is passionate about leveraging technology to solve complex business challenges in the banking sector. Outside of work, he enjoys traveling and spending quality time with his family.
(max-width: 768px){[data-eb-6a8f3296] .rgft_98b54368.rgft_ff19c5f9{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:1.333;font-weight:400}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_98b54368.rgft_ff19c5f9{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:1.333;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_98b54368.rgft_ff19c5f9{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_98b54368.rgft_ff19c5f9{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_98b54368.rgft_ff19c5f9{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_98b54368.rgft_ff19c5f9{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_98b54368.rgft_ff19c5f9{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_98b54368 ul{list-style-type:disc;margin-top:2rem}[data-eb-6a8f3296] .rgft_98b54368.rgft_2a7f98ee{display:inline;position:relative;cursor:pointer;text-decoration:none!important;color:var(--rg-color-link-default, #006CE0);background:linear-gradient(to right,currentcolor,currentcolor);background-size:100% .1em;background-position:0 100%;background-repeat:no-repeat}[data-eb-6a8f3296] .rgft_98b54368.rgft_2a7f98ee:focus-visible{color:var(--rg-color-link-focus, #006CE0)}[data-eb-6a8f3296] .rgft_98b54368.rgft_2a7f98ee:hover{color:var(--rg-color-link-hover, #003B8F);animation:rgft_9beb7cc5 .3s cubic-bezier(0,0,.2,1)}[data-eb-6a8f3296] .rgft_98b54368.rgft_2a7f98ee:visited{color:var(--rg-color-link-visited, #6842FF)}@keyframes rgft_9beb7cc5{0%{background-size:0 .1em}to{background-size:100% .1em}}[data-eb-6a8f3296] .rgft_d835af5c{color:var(--rg-color-text-title, #161D26)}[data-eb-6a8f3296] .rgft_d835af5c.rgft_3e9243e1{font-size:calc(4.5rem * var(--font-size-multiplier, 1.6));line-height:1.111;font-weight:500;font-family:Amazon Ember Display,Amazon Ember,Helvetica Neue,Helvetica,Arial,sans-serif}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_3e9243e1{font-size:calc(3.75rem * var(--font-size-multiplier, 1.6));line-height:1.133;font-weight:500}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_3e9243e1{font-size:calc(3rem * var(--font-size-multiplier, 1.6));line-height:1.167;font-weight:500}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_d835af5c.rgft_3e9243e1{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_d835af5c.rgft_3e9243e1{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_d835af5c.rgft_3e9243e1{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_d835af5c.rgft_3e9243e1{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d835af5c.rgft_3e9243e1{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_d835af5c.rgft_54816d41{font-size:calc(3.75rem * var(--font-size-multiplier, 1.6));line-height:1.133;font-weight:500;font-family:Amazon Ember Display,Amazon 
Ember,Helvetica Neue,Helvetica,Arial,sans-serif}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_54816d41{font-size:calc(3rem * var(--font-size-multiplier, 1.6));line-height:1.167;font-weight:500}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_54816d41{font-size:calc(2.5rem * var(--font-size-multiplier, 1.6));line-height:1.2;font-weight:500}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_d835af5c.rgft_54816d41{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_d835af5c.rgft_54816d41{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_d835af5c.rgft_54816d41{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_d835af5c.rgft_54816d41{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d835af5c.rgft_54816d41{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_d835af5c.rgft_852a8b78{font-size:calc(3rem * var(--font-size-multiplier, 1.6));line-height:1.167;font-weight:500;font-family:Amazon Ember Display,Amazon Ember,Helvetica Neue,Helvetica,Arial,sans-serif}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_852a8b78{font-size:calc(2.5rem * var(--font-size-multiplier, 1.6));line-height:1.2;font-weight:500}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_852a8b78{font-size:calc(2rem * var(--font-size-multiplier, 1.6));line-height:1.25;font-weight:500}}[data-eb-6a8f3296] [data-rg-lang= | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/create-aws-glue-data-catalog-views-using-cross-account-definer-roles/ | Create AWS Glue Data Catalog views using cross-account definer roles | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Create AWS Glue Data Catalog views using cross-account definer roles by Aarthi Srinivasan and Sundeep Kumar on 08 JAN 2026 in Advanced (300) , Analytics , AWS Glue , Technical How-to Permalink Comments Share With AWS Glue Data Catalog views you can create a SQL view in the Data Catalog that references one or more base tables. These multi-dialect views support various SQL query engines, providing consistent access across multiple Amazon Web Services (AWS) services including Amazon Athena , Amazon Redshift Spectrum, and Apache Spark in both Amazon EMR and AWS Glue 5.0 . You can now create Data Catalog views using a cross-account AWS Identity and Access Management (IAM) definer role. A definer role is an IAM role used to create the Data Catalog view and has SELECT permissions on all columns of the underlying base tables. This definer role is assumed by AWS Glue and AWS Lake Formation service principals to vend credentials to the base tables’ data whenever the view is queried. The definer role allows the Data Catalog view to be shared to principals or AWS accounts so that you can share a filtered subset of data without sharing the base tables. Previously, Data Catalog views required a definer role within the same AWS account as the base tables. The introduction of cross-account definer roles enables Data Catalog view creation in enterprise data mesh architectures. In this setup, database and table metadata is centralized in a governance account, and individual data owner accounts maintain control over table creation and management through their IAM roles. Data owner accounts can now create and manage Data Catalog views in the central governance accounts using their existing continuous integration and continuous delivery (CI/CD) pipeline roles. In this post, we show you a cross-account scenario involving two AWS accounts: a central governance account containing the tables and hosting the views and a data owner (producer) account with the IAM role used to create and manage views. We provide implementation details for both SPARK dialect using AWS SDK code samples and ATHENA dialect using SQL commands. Using this approach, you can implement sophisticated data governance models at enterprise scale while maintaining operational efficiency across your AWS environment. Key benefits Key benefits for cross-account definer roles are as follows: Enhanced data mesh support – Enterprises with multi-account data lakehouse architectures can now maintain their existing operational model where data owner accounts manage table creation and updates using their established IAM roles. These same roles can now create and manage Data Catalog views across account boundaries. Strengthened security controls – By keeping table and view management within data owner account roles: Security posture is enhanced through proper separation of duties. Audit trails become more comprehensive and meaningful. Access controls follow the principle of least privilege. Elimination of data duplication – Data owner accounts can create views in central accounts that: Provide access to specific data subsets without duplicating tables. 
Reduce storage costs and management overhead. Maintain a single source of truth while enabling targeted data sharing. Solution overview An example customer has a database with two transaction tables in their central account, where the catalog and permissions are maintained. With the database shared to the data owner (producer) account, we create a Data Catalog view in the central account on these two tables, using the producer’s definer role. The view from the central account can be shared to additional consumer accounts and queried. We illustrate creating the SPARK dialect using create-table CLI , and add the ATHENA dialect for the same view from the Athena console . We also provide the AWS SDK sample code for CreateTable() and UpdateTable() , with view definition and a sample pySpark script to read and verify the view in AWS Glue. The following diagram shows the table, view, and definer IAM role placements between a central governance account and data producer account. Prerequisites To perform this solution, you need to have the following prerequisites: Two AWS accounts with AWS Lake Formation set up. For details, refer to Set up AWS Lake Formation . The Lake Formation setup includes registering your IAM admin role as Lake Formation administrator. In the Data Catalog settings , shown in the following screenshot, Default permissions for newly created databases and tables is set to use Lake Formation permissions only. Cross-account version settings is set to Version 4 . Create an IAM role Data-Analyst in the producer account. For the IAM permissions on this role, refer to Data analyst permissions . This role will also be used as the view definer role. Add the permissions to this definer role from the Prerequisites for creating views . Create database and tables in the central account In this step, you create two tables in the central governance account and populate them with few rows of data: Sign in to the central account as admin user. Open the Athena console and set up the Athena query results bucket . Run the following queries to create two sample Iceberg tables, representing bank customer transactions data: /* Check if the Database exists, if not create new database. 
*/ CREATE DATABASE IF NOT EXISTS bankdata_icebergdb; /* Create transaction_table1. Replace the bucket name with your own bucket. */ CREATE TABLE bankdata_icebergdb.transaction_table1 ( transaction_id string, transaction_type string, transaction_amount double) LOCATION 's3://<bucket-name>/bankdata_icebergdb/transaction-table1' TBLPROPERTIES ( 'table_type'='iceberg', 'write_compression'='zstd' ); /* Create transaction_table2 */ CREATE TABLE bankdata_icebergdb.transaction_table2 ( transaction_id string, transaction_location string, transaction_date date) LOCATION 's3://<bucket-name>/bankdata_icebergdb/transaction-table2' TBLPROPERTIES ( 'table_type'='iceberg', 'write_compression'='zstd' ); INSERT INTO bankdata_icebergdb.transaction_table1 (transaction_id, transaction_type, transaction_amount) VALUES ('T001', 'purchase', 50.0), ('T002', 'purchase', 120.0), ('T003', 'refund', 200.5), ('T004', 'purchase', 80.0), ('T005', 'withdrawal', 500.0), ('T006', 'purchase', 300.0), ('T007', 'deposit', 1000.0), ('T008', 'refund', 20.0), ('T009', 'purchase', 150.0), ('T010', 'withdrawal', 75.0); INSERT INTO bankdata_icebergdb.transaction_table2 (transaction_id, transaction_location, transaction_date) VALUES ('T001', 'Charlotte', DATE '2024-10-01'), ('T002', 'Seattle', DATE '2024-10-02'), ('T003', 'Chicago', DATE '2024-10-03'), ('T004', 'Miami', DATE '2024-10-04'), ('T005', 'New York', DATE '2024-10-05'), ('T006', 'Austin', DATE '2024-10-06'), ('T007', 'Denver', DATE '2024-10-07'), ('T008', 'Boston', DATE '2024-10-08'), ('T009', 'San Jose', DATE '2024-10-09'), ('T010', 'Phoenix', DATE '2024-10-10'); Verify the created tables in the Athena query editor by running a preview. Share the database and tables from central to producer account In the central governance account, you share the database and the two tables with the producer account and the Data-Analyst role in the producer account. Sign in to the Lake Formation console as the Lake Formation admin role. In the navigation pane, choose Data permissions . Choose Grant and provide the following information: For Principals , select External accounts and enter the producer account ID, as shown in the following screenshot. For Named Data Catalog Resources , select the default catalog and database bankdata_icebergdb , as shown in the following screenshot. Under Database permissions , select Describe . For Grantable permissions , select Describe . Choose Grant . Repeat the preceding steps to grant access to the producer account definer role Data-Analyst on the database bankdata_icebergdb and the two tables transaction_table1 and transaction_table2 as follows. Under Database permissions , grant Create table and Describe permissions. Under Table permissions , grant Select and Describe on all columns. With these steps, the central governance account admin has shared the database and tables with the producer account definer role. Steps for producer account Follow these steps for the producer account: Sign in to the Lake Formation console on the producer account as the Lake Formation administrator. In the left navigation pane, choose Databases . A blue banner will appear on the console, showing pending invitations from AWS Resource Access Manager (AWS RAM). Open the AWS RAM console and review the AWS RAM shares under Shared with me. You will see the AWS RAM shares in pending state. Select the pending AWS RAM share from the central account and choose Accept resource share . After the resource share request is accepted, the shared database shows up in the producer account; a scripted alternative to this acceptance step is sketched below.
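If you prefer to script the acceptance of the pending share instead of using the console, the following minimal boto3 sketch (not part of the original walkthrough; the Region and credentials are assumptions) lists the pending AWS RAM invitations in the producer account and accepts them:

import boto3

# Hypothetical sketch: accept pending AWS RAM resource share invitations in the producer account.
ram = boto3.client("ram", region_name="<your-region>")

invitations = ram.get_resource_share_invitations()["resourceShareInvitations"]
for invitation in invitations:
    if invitation["status"] == "PENDING":
        ram.accept_resource_share_invitation(
            resourceShareInvitationArn=invitation["resourceShareInvitationArn"]
        )
        print(f"Accepted resource share: {invitation['resourceShareName']}")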
On the Lake Formation console, select the database. On the Create dropdown list, choose Resource link . Provide a name rl_bank_iceberg and choose Create . Let’s grant Describe permission on the resource link to the Data-Analyst role in the producer account in the following steps. In the left navigation pane, choose Data permissions . Choose the Data-Analyst role. Select the resource link rl_bank_iceberg for the database as shown in the following screenshot. Grant Describe permission on the resource link. Note: Cross-account Data Catalog views can’t be created using a resource link, although a resource link is needed for the SDK use of SPARK dialect. Next, add the central account Data Catalog as a Data Source in Athena from producer account: Open the Athena console. On the left navigation pane, choose Data sources and catalogs . Choose Create data source . Select S3-AWS Glue Data Catalog . Choose AWS – Glue Data Catalog in another account and name the data source as centraladmin . Choose Next and then create data source. After the data source is created, navigate to the Query editor and verify the Data source centraladmin appears, as shown in the following screenshot. The definer role can also now access and query the central catalog database. Create SPARK dialect view In this step, you create a view with SPARK dialect, using AWS Glue CLI command create-table : Sign in to the AWS console in the producer account as Data-Analyst role. Enter the following command in your CLI environment, such as AWS CloudShell , to create a SPARK DIALECT: aws glue create-table --cli-input-json '{ "DatabaseName": "rl_bank_iceberg", "TableInput": { "Name": "mdv_transaction1", "StorageDescriptor": { "Columns": [ { "Name": "transaction_id", "Type": "string" }, { "Name": "transaction_type", "Type": "string" }, { "Name": "transaction_amount", "Type": "float" }, { "Name": "transaction_location", "Type": "string" }, { "Name": "transaction_date", "Type": "date" } ], "SerdeInfo": {} }, "ViewDefinition": { "SubObjects": [ "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table1", "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table2" ], "IsProtected": true, "Representations": [ { "Dialect": "SPARK", "DialectVersion": "1.0", "ViewOriginalText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100;", "ViewExpandedText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100;" } ] } } }' Open the Lake Formation console and verify if the view is created. Verify the dialect of the view on the SQL definitions tab for the view details. Add ATHENA dialect To add ATHENA dialect, follow these steps: On the Athena console, select centraladmin from the Data source . 
Enter the following SQL script to create the ATHENA dialect for the same view: ALTER VIEW mdv_transaction1 FORCE ADD DIALECT AS SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100; We can't use the resource link rl_bank_iceberg in the Athena query editor to create or alter a view in the central account. Verify the added dialect by running a preview in Athena. For running the query, you can use either the resource link rl_bank_iceberg from the producer account catalog or the centraladmin catalog. The following screenshot shows querying using the resource link of the database in the producer account catalog. The following screenshot shows querying the view from the producer using the connected catalog centraladmin as the data source. Verify the dialects on the view by inspecting the table in the Lake Formation console. You can now query the view as the Data-Analyst role in the producer account, using both Athena and Spark. The view will also show in the central account, with access for the Lake Formation admin. You can also create the view with the ATHENA dialect first and then add the SPARK dialect. The SQL syntax to create the view in the ATHENA dialect is shown in the following example: create protected multi dialect view mdv_transaction1 security definer as SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100; The update-table CLI command to add the corresponding SPARK dialect is shown in the following example: aws glue update-table --cli-input-json '{ "DatabaseName": "rl_bank_iceberg", "ViewUpdateAction": "ADD", "Force": true, "TableInput": { "Name": "mdv_transaction1", "StorageDescriptor": { "Columns": [ { "Name": "transaction_id", "Type": "string" }, { "Name": "transaction_type", "Type": "string" }, { "Name": "transaction_amount", "Type": "float" }, { "Name": "transaction_location", "Type": "string" }, { "Name": "transaction_date", "Type": "date" } ], "SerdeInfo": {} }, "ViewDefinition": { "SubObjects": [ "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table1", "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table2" ], "IsProtected": true, "Representations": [ { "Dialect": "SPARK", "DialectVersion": "1.0", "ViewOriginalText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100", "ViewExpandedText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100" } ] } } }' The following is a sample Python script to create a SPARK dialect view: glueview-createtable.py . The following code block is a sample AWS Glue extract, transform, and load (ETL) script to access the Spark dialect of the view from AWS Glue 5.0 from the central account.
The AWS Glue job execution role should have Lake Formation SELECT permission on the AWS Glue view: from pyspark.context import SparkContext from pyspark.sql import SparkSession aws_region = "<your-region>" aws_account_id = "<your-central-account-id>" local_catalogname = "spark_catalog" warehouse_path = "s3://<your-bucket-name>/bankdata_icebergdb/transaction-table1" spark = SparkSession.builder.appName('query_glue_view') \ .config('spark.sql.extensions','org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions') \ .config(f'spark.sql.catalog.{local_catalogname}', 'org.apache.iceberg.spark.SparkSessionCatalog') \ .config(f'spark.sql.catalog.{local_catalogname}.catalog-impl', 'org.apache.iceberg.aws.glue.GlueCatalog') \ .config(f'spark.sql.catalog.{local_catalogname}.client.region', aws_region) \ .config(f'spark.sql.catalog.{local_catalogname}.glue.account-id', aws_account_id) \ .config(f'spark.sql.catalog.{local_catalogname}.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO') \ .config(f'spark.sql.catalog.{local_catalogname}.warehouse', warehouse_path) \ .getOrCreate() spark.sql("SHOW DATABASES").show() spark.sql(f"SHOW TABLES IN {local_catalogname}.bankdata_icebergdb").show() spark.sql(f"SELECT * FROM {local_catalogname}.bankdata_icebergdb.mdv_transaction1").show() In the AWS Glue job details, set the following additional parameters for Lake Formation managed tables and for Iceberg tables, respectively: --enable-lakeformation-fine-grained-access = true --datalake-formats = iceberg Clean up To avoid incurring costs, clean up the resources you used for this post: Revoke the Lake Formation permissions granted to the Data-Analyst role and the producer account Drop the Athena tables Delete the Athena query results from your Amazon Simple Storage Service (Amazon S3) bucket Delete the Data-Analyst role from IAM Conclusion In this post, we demonstrated how to use cross-account IAM definer roles with AWS Glue Data Catalog views . We showed how data owner accounts can create and manage views in a central governance account while maintaining security and control over their data assets. This feature enables enterprises to implement sophisticated data mesh architectures without compromising on security or requiring data duplication. The ability to use cross-account definer roles with Data Catalog views provides several key advantages: Streamlines view management in multi-account environments Maintains existing CI/CD workflows and automation Enhances security through centralized governance Reduces operational overhead by eliminating the need for data duplication As organizations continue to build and scale their data lakehouse architectures across multiple AWS accounts, cross-account definer roles for Data Catalog views provide a crucial capability for implementing efficient, secure, and well-governed data sharing patterns. About the authors Aarthi Srinivasan Aarthi is a Senior Big Data Architect at Amazon Web Services (AWS). She works with AWS customers and partners to architect data lake solutions, enhance product features, and establish best practices for data governance. Sundeep Kumar Sundeep is a Sr. Specialist Solutions Architect at Amazon Web Services (AWS), helping customers build data lake and analytics platforms and solutions. When not building and designing data lakes, Sundeep enjoys listening to music and playing guitar.
| 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/aws-analytics-at-reinvent-2025-unifying-data-ai-and-governance-at-scale/ | AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, accelerate insights with AI, and maintain robust governance without sacrificing agility. Amazon SageMaker: Your data platform, simplified AWS introduced a faster, simpler approach to data platform onboarding for Amazon SageMaker Unified Studio . The new one-click onboarding experience eliminates weeks of setup, so teams can start working with existing datasets in minutes using their current AWS Identity and Access Management (IAM) roles and permissions. Accessible directly from Amazon SageMaker , Amazon Athena , Amazon Redshift , and Amazon S3 Tables consoles, this streamlined experience automatically creates SageMaker Unified Studio projects with existing data permissions intact. At its core is a powerful new serverless notebook that reimagines how data professionals work. This single interface combines SQL queries, Python code, Apache Spark processing, and natural language prompts, backed by Amazon Athena for Apache Spark to scale from interactive exploration to petabyte-scale jobs. Data engineers, analysts, and data scientists no longer need to context-switch between different tools based on workload—they can explore data with SQL, build models with Python, and use AI assistance, all in one place. The introduction of Amazon SageMaker Data Agent in the new SageMaker notebooks marks a pivotal moment in AI-assisted development for data builders. This built-in agent doesn’t only generate code, it understands your data context, catalog information, and business metadata to create intelligent execution plans from natural language descriptions. When you describe an objective, the agent breaks down complex analytics and machine learning (ML) tasks into manageable steps, generates the required SQL and Python code, and maintains awareness of your notebook environment throughout the entire process. This capability transforms hours of manual coding into minutes of guided development, which means teams can focus on gleaning insights rather than repetitive boilerplate. Embracing open data with Apache Iceberg One significant theme across this year’s launches was the widespread adoption of Apache Iceberg across AWS analytics, transforming how organizations manage petabyte-scale data lakes. Catalog federation to remote Iceberg catalogs through the AWS Glue Data Catalog addresses a critical challenge in modern data architectures. 
You can now query remote Iceberg tables, stored in Amazon Simple Storage Service (Amazon S3) and catalogued in remote Iceberg catalogs, using preferred AWS analytics services such as Amazon Redshift, Amazon EMR , Amazon Athena, AWS Glue, and Amazon SageMaker, without moving or copying tables. Metadata synchronizes in real time, providing query results that reflect the current state. Catalog federation supports both coarse-grained access control and fine-grained access permissions through AWS Lake Formation, enabling cross-account sharing and trusted identity propagation while maintaining consistent security across federated catalogs. Amazon Redshift now writes directly to Apache Iceberg tables, enabling true open lakehouse architectures where analytics seamlessly span data warehouses and lakes. Apache Spark on Amazon EMR 7.12 , AWS Glue, Amazon SageMaker notebooks, Amazon S3 Tables, and the AWS Glue Data Catalog now support Iceberg V3's capabilities, including deletion vectors, which mark deleted rows without expensive file rewrites (dramatically reducing pipeline costs and accelerating data modifications), and row lineage. V3 automatically tracks every record's history, creating audit trails essential for compliance, and adds table-level encryption that helps organizations meet stringent privacy regulations. These innovations mean faster writes, lower storage costs, comprehensive audit trails, and efficient incremental processing across your data architecture. Governance that scales with your organization Data governance received substantial attention at re:Invent with major enhancements to Amazon SageMaker Catalog . Organizations can now curate data at the column level with custom metadata forms and rich text descriptions , indexed in real time for immediate discoverability. New metadata enforcement rules require data producers to classify assets with approved business vocabulary before publication, providing consistency across the enterprise. The catalog uses Amazon Bedrock large language models (LLMs) to automatically suggest relevant business glossary terms by analyzing table metadata and schema information, bridging the gap between technical schemas and business language. Perhaps most importantly, SageMaker Catalog now exports its entire asset metadata as queryable Apache Iceberg tables through Amazon S3 Tables. This way, teams can analyze catalog inventory with standard SQL to answer questions like "which assets lack business descriptions?" or "how many confidential datasets were registered last month?" without building custom ETL infrastructure. As organizations adopt multi-warehouse architectures to scale and isolate workloads, the new Amazon Redshift federated permissions capability eliminates governance complexity. Define data permissions once from an Amazon Redshift warehouse, and they are automatically enforced across the warehouses in your account. Row-level, column-level, and masking controls apply consistently regardless of which warehouse queries originate from, and new warehouses automatically inherit permission policies. This horizontal scalability means organizations can add warehouses without increasing governance overhead, and analysts immediately see the databases from registered warehouses. Accelerating AI innovation with Amazon OpenSearch Service Amazon OpenSearch Service introduced powerful new capabilities to simplify and accelerate AI application development.
With support for OpenSearch 3.3 , agentic search enables precise results using natural language inputs without the need for complex queries, making it easier to build intelligent AI agents. The new Apache Calcite-powered PPL engine delivers query optimization and an extensive library of commands for more efficient data processing. As seen in Matt Garman’s keynote , building large-scale vector databases is now dramatically faster with GPU acceleration and auto-optimization . Previously, creating large-scale vector indexes required days of building time and weeks of manual tuning by experts, which slowed innovation and prevented cost-performance optimizations. The new serverless auto-optimize jobs automatically evaluate index configurations—including k-nearest neighbors (k-NN) algorithms, quantization, and engine settings—based on your specified search latency and recall requirements. Combined with GPU acceleration, you can build optimized indexes up to ten times faster at 25% of the indexing cost, with serverless GPUs that activate dynamically and bill only when providing speed boosts. These advancements simplify scaling AI applications such as semantic search, recommendation engines, and agentic systems, so teams can innovate faster by dramatically reducing the time and effort needed to build large-scale, optimized vector databases. Performance and cost optimization Also announced in the keynote , Amazon EMR Serverless now eliminates local storage provisioning for Apache Spark workloads, introducing serverless storage that reduces data processing costs by up to 20% while preventing job failures from disk capacity constraints. The fully managed, auto scaling storage encrypts data in transit and at rest with job-level isolation, allowing Spark to release workers immediately when idle rather than keeping them active to preserve temporary data. Additionally, AWS Glue introduced materialized views based on Apache Iceberg, storing precomputed query results that automatically refresh as source data changes. Spark engines across Amazon Athena, Amazon EMR, and AWS Glue intelligently rewrite queries to use these views, accelerating performance by up to eight times while reducing compute costs. The service handles refresh schedules, change detection, incremental updates, and infrastructure management automatically. The new Apache Spark upgrade agent for Amazon EMR transforms version upgrades from months-long projects into week-long initiatives. Using conversational interfaces, engineers express upgrade requirements in natural language while the agent automatically identifies API changes and behavioral modifications across PySpark and Scala applications. Engineers review and approve suggested changes before implementation, maintaining full control while the agent validates functional correctness through data quality checks. Currently supporting upgrades from Spark 2.4 to 3.5, this capability is available through SageMaker Unified Studio, Kiro CLI , or an integrated development environment (IDE) with Model Context Protocol compatibility. For workflow optimization, AWS introduced a new Serverless deployment option for Amazon Managed Workflows for Apache Airflow (Amazon MWAA), which eliminates the operational overhead of managing Apache Airflow environments while optimizing costs through serverless scaling. This new offering addresses key challenges of operational scalability, cost optimization, and access management that data engineers and DevOps teams face when orchestrating workflows. 
With Amazon MWAA Serverless , data engineers can focus on defining their workflow logic rather than monitoring for provisioned capacity. They can now submit their Airflow workflows for execution on a schedule or on demand, paying only for the actual compute time used during each task's execution. Looking forward These launches collectively represent more than incremental improvements. They signal a fundamental shift in how organizations are approaching analytics. By unifying data warehousing, data lakes, and ML under a common framework built on Apache Iceberg, simplifying access through intelligent interfaces powered by AI, and maintaining robust governance that scales effortlessly, AWS is giving organizations the tools to focus on insights rather than infrastructure. The emphasis on automation, from AI-assisted development to self-managing materialized views and serverless storage, reduces operational overhead while improving performance and cost efficiency. As data volumes continue to grow and AI becomes increasingly central to business operations, these capabilities position AWS customers to accelerate their data-driven initiatives with unprecedented simplicity and power. To view the re:Invent 2025 Innovation Talk on analytics, visit Harnessing analytics for humans and AI on YouTube. About the authors Larry Weber Larry leads product marketing for the analytics portfolio at AWS. | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/access-databricks-unity-catalog-data-using-catalog-federation-in-the-aws-glue-data-catalog/#Comments | Access Databricks Unity Catalog data using catalog federation in the AWS Glue Data Catalog | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Access Databricks Unity Catalog data using catalog federation in the AWS Glue Data Catalog by Srividya Parthasarathy and Venkat Viswanathan on 12 JAN 2026 in Advanced (300) , Amazon SageMaker , AWS Glue , AWS Lake Formation , Technical How-to Permalink Comments Share AWS has launched the catalog federation capability, enabling direct access to Apache Iceberg tables managed in Databricks Unity Catalog through the AWS Glue Data Catalog . With this integration, you can discover and query Unity Catalog data in Iceberg format using an Iceberg REST API endpoint, while maintaining granular access controls through AWS Lake Formation . This approach significantly reduces operational overhead for managing catalog synchronization and associated costs by alleviating the need to replicate or duplicate datasets between platforms. In this post, we demonstrate how to set up catalog federation between the Glue Data Catalog and Databricks Unity Catalog, enabling data querying using AWS analytics services. Use cases and key benefits This federation capability is particularly valuable if you run multiple data platforms, because you can maintain your existing Iceberg catalog investments while using AWS analytics services. Catalog federation supports read operations and provides the following benefits: Interoperability – You can enable interoperability across different data platforms and tools through Iceberg REST APIs while preserving the value of your established technology investments. Cross-platform analytics – You can connect AWS analytics tools ( Amazon Athena , Amazon Redshift , Apache Spark) to query Iceberg and UniForm tables stored in Databricks Unity Catalog. It supports Databricks on AWS integration with the AWS Glue Iceberg REST Catalog for metadata retrieval, while using Lake Formation for permission management. Metadata management – The solution avoids manual catalog synchronization by making Databricks Unity Catalog databases and tables discoverable within the Data Catalog. You can implement unified governance through Lake Formation for fine-grained access control across federated catalog resources. Solution overview The solution uses catalog federation in the Data Catalog to integrate with Databricks Unity Catalog. The federated catalog created in AWS Glue mirrors the catalog objects in Databricks Unity Catalog and supports OAuth-based authentication. The solution is represented in the following diagram. The integration involves three high-level steps: Set up an integration principal in Databricks Unity Catalog and provide required read access on catalog resources to this principal. Enable OAuth-based authentication for the integration principal. Set up catalog federation to Databricks Unity Catalog in the Glue Data Catalog: Create a federated catalog in the Data Catalog using an AWS Glue connection. Create an AWS Glue connection that uses the credentials of the integration principal (in Step 1) to connect to Databricks Unity Catalog. 
Configure an AWS Identity and Access Management (IAM) role with permission to Amazon Simple Storage Service (Amazon S3) locations where the Iceberg table data resides. In a cross-account scenario, make sure the bucket policy grants required access to this IAM role. Discover Iceberg tables in federated catalogs using Lake Formation or AWS Glue APIs. During query operations, Lake Formation manages fine-grained permissions on federated resources and credential vending for access to the underlying data. In the following sections, we walk through the steps to integrate the Glue Data Catalog with Databricks Unity Catalog on AWS. Prerequisites To follow along with the solution presented in this post, you must have the following prerequisites: Databricks Workspace (on AWS) with Databricks Unity Catalog configured. An IAM role that is a Lake Formation data lake administrator in your AWS account. A data lake administrator is an IAM principal that can register S3 locations, access the Data Catalog, grant Lake Formation permissions to other users, and view AWS CloudTrail logs. See Create a data lake administrator for more information. Configure Databricks Unity Catalog for external access Catalog federation to a Databricks Unity Catalog uses the OAuth2 credentials of a Databricks service principal configured in the workspace admin settings. This authentication mechanism allows the Data Catalog to access the metadata of various objects (such as catalogs, databases, and tables) within Databricks Unity Catalog, based on the privileges associated with the service principal. For proper functionality, grant the service principal with the necessary permissions (read permission on catalog, schema, and tables) to read the metadata of these objects and allow access from external engines. Next, catalog federation enables discovery and query of Iceberg tables in your Databricks Unity Catalog. For reading delta tables, enable UniForm on a Delta Lake table in Databricks to generate Iceberg metadata. For more information, refer to Read Delta tables with Iceberg clients . Follow the Databricks tutorial and documentation to create the service principal and associated privileges in your Databricks workspace. For this post, we use a service principal named integrationprincipal that is configured with required permissions (SELECT, USE CATALOG, USE SCHEMA) on Databricks Unity Catalog objects and will be used for authentication to catalog instance. Catalog federation supports OAuth2 authentication, so enable OAuth for the service principal and note down the client_id and client_secret for later use. Set up Data Catalog federation with Databricks Unity Catalog Now that you have service principal access for Databricks Unity Catalog, you can set up catalog federation in the Data Catalog. To do so, you create an AWS Secrets Manager secret and create an IAM role for catalog federation. Create secret Complete the following steps to create a secret: Sign in to the AWS Management Console using an IAM role with access to Secrets Manager. On the Secrets Manager console, choose Store a new secret and Other type of secret . Set the key-value pair: Key: USER_MANAGED_CLIENT_APPLICATION_CLIENT_SECRET Value: The client secret noted earlier Choose Next . Enter a name for your secret (for this post, we use dbx ). Choose Store . 
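If you manage credentials through scripts rather than the console, a comparable secret can be created with boto3. This is a minimal sketch, not from the original post; the Region and the placeholder client secret value are assumptions, while the secret name dbx and the key name match the key-value pair described above:

import json
import boto3

# Hypothetical sketch: store the Databricks OAuth client secret in Secrets Manager.
secretsmanager = boto3.client("secretsmanager", region_name="<your-region>")

secretsmanager.create_secret(
    Name="dbx",  # secret name used in this post
    SecretString=json.dumps(
        {"USER_MANAGED_CLIENT_APPLICATION_CLIENT_SECRET": "<client_secret noted earlier>"}
    ),
)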
Create IAM role for catalog federation As the catalog owner of a federated catalog in the Data Catalog, you can use Lake Formation to implement comprehensive access controls, including table filters, column filters, and row filters, as well as tag-based access for your data teams. Lake Formation requires an IAM role with permissions to access the underlying S3 locations of your external catalog. In this step, you create an IAM role that enables the AWS Glue connection to access Secrets Manager, optional virtual private cloud (VPC) configurations, and Lake Formation to manage credential vending for the S3 bucket and prefix: Secrets Manager access – The AWS Glue connection requires permissions to retrieve secret values from Secrets Manager for OAuth tokens stored for your Databricks Unity service connection. VPC access (optional) – When using VPC endpoints to restrict connectivity to your Databricks Unity account, the AWS Glue connection needs permissions to describe and utilize VPC network interfaces. This configuration provides secure, controlled access to both your stored credentials and network resources while maintaining proper isolation through VPC endpoints. S3 bucket and AWS KMS key permission – The AWS Glue connection requires Amazon S3 permissions to read certificates if used in the connection setup. Additionally, Lake Formation requires read permissions on the bucket and prefix where the remote catalog table data resides. If the data is encrypted using an AWS Key Management Service (AWS KMS) key, additional AWS KMS permissions are required. Complete the following steps: Create an IAM role called LFDataAccessRole with the following policies: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": [ "<secrets manager ARN>" ] }, { "Effect": "Allow", "Action": [ "ec2:CreateNetworkInterface", "ec2:DeleteNetworkInterface", "ec2:DescribeNetworkInterfaces" ], "Resource": "*", "Condition": { "ArnEquals": { "ec2:Vpc": "arn:aws:ec2:region:account-id:vpc/<vpc-id>", "ec2:Subnet": [ "arn:aws:ec2:region:account-id:subnet/<subnet-id>" ] } } }, { # Required when using custom cert to sign requests. "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3 :::<bucketname>/<certpath>" ] }, { # Required when using customer managed encryption key for s3 "Effect": "Allow", "Action": [ "kms:decrypt", "kms:encrypt" ], "Resource": [ "<kmsKey>" ] } ] } Configure the role with the following trust policy: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": ["glue.amazonaws.com","lakeformation.amazonaws.com"] }, "Action": "sts:AssumeRole" } ] } Create federated catalog in Data Catalog AWS Glue supports the DATABRICKSICEBERGRESTCATALOG connection type for connecting the Data Catalog with managed Databricks Unity Catalog. This AWS Glue connector supports OAuth2 authentication for discovering metadata in Databricks Unity Catalog. Complete the following steps to create the federated catalog: Sign in to the console as a data lake admin. On the Lake Formation console, choose Catalogs in the navigation pane. Choose Create catalog . For Name , enter a name for your catalog. For Catalog name in Databricks , enter the name of a catalog existing in Databricks Unity Catalog. For Connection name , enter a name for the AWS Glue connection. For Workspace URL , enter the Unity Iceberg REST API URL (in format https:// <workspace-url> /cloud.databricks.com ). 
For Authentication , provide the following information: For Authentication type , choose OAuth2 . Alternatively, you can choose Custom authentication . For Custom authentication , an access token is created, refreshed, and managed by the customer’s application or system and stored using Secrets Manager. For Token URL , enter the token authentication server URL. For OAuth Client ID , enter the client_id for integrationprincipal . For OAuth Secret , enter the secret ARN that you created in the previous step. Alternatively, you can provide the client_secret directly. For Token URL parameter map scope , provide the API scope supported. If you have AWS PrivateLink set up or a proxy set up, you can provide network details under Settings for network configurations . For Register Glue connection with Lake Formation , choose the IAM role ( LFDataAccessRole ) created earlier to manage data access using Lake Formation. When the setup is done using AWS Command Line Interface (AWS CLI) commands, you have options to create two separate IAM roles: IAM role with policies to access network and secrets, which AWS Glue assumes to manage authentication IAM role with access to the S3 bucket, which Lake Formation assumes to manage credential vending for data access On the console, this setup is simplified with a single role having combined policies. For more details, refer to Federate to Databricks Unity Catalog . To test the connection, choose Run test . You can proceed to create the catalog. After you create the catalog, you can see the databases and tables in Databricks Unity Catalog listed under the federated catalog. You can implement fine-grained access control on the tables by applying row and column filters using Lake Formation. The following video shows the catalog federation setup with Databricks Unity Catalog. Discover and query the data using Athena In this post, we show how to use the Athena query editor to discover and query the Databricks Unity Catalog tables. On the Athena console, run the following query to access the federated table: SELECT * FROM "customerschema"."person" limit 10; The following video demonstrates querying the federated table from Athena. If you use the Amazon Redshift query engine, you must create a resource link on the federated database and grant permission on the resource link to the user or role. This database resource link is automounted under awsdatacatalog based on the permission granted for the user or role and available for querying. For instructions, refer to Creating resource links. Clean up To clean up your resources, complete the following steps: Delete the catalog and namespace in Databricks Unity Catalog for this post. Drop the resources in the Data Catalog and Lake Formation created for this post. Delete the IAM roles and S3 buckets used for this post. Delete any VPC and KMS keys if used for this post. Conclusion In this post, we explored the key elements of catalog federation and its architectural design, illustrating the interaction between the AWS Glue Data Catalog and Databricks Unity Catalog through centralized authorization and credential distribution for protected data access. By removing the requirement for complicated synchronization workflows, catalog federation makes it possible to query Iceberg data on Amazon S3 directly at its source using AWS analytics services with data governance across multi-catalog platforms. Try out the solution for your own use case, and share your feedback and questions in the comments. 
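For readers who want to run the Athena verification query shown earlier from a script instead of the query editor, here is a minimal boto3 sketch; the federated catalog name, results bucket, and Region are assumptions rather than values from this post:

import boto3

# Hypothetical sketch: run the verification query against the federated catalog through the Athena API.
athena = boto3.client("athena", region_name="<your-region>")

response = athena.start_query_execution(
    QueryString='SELECT * FROM "customerschema"."person" LIMIT 10',
    QueryExecutionContext={"Catalog": "<your-federated-catalog>", "Database": "customerschema"},
    ResultConfiguration={"OutputLocation": "s3://<your-athena-results-bucket>/"},
)
print(response["QueryExecutionId"])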
About the Authors Srividya Parthasarathy Srividya is a Senior Big Data Architect on the AWS Lake Formation team. She works with the product team and customers to build robust features and solutions for their analytical data platform. She enjoys building data mesh solutions and sharing them with the community. Venkatavaradhan (Venkat) Viswanathan Venkat is a Global Partner Solutions Architect at Amazon Web Services. Venkat is a Technology Strategy Leader in Data, AI, ML, Generative AI, and Advanced Analytics. Venkat is a Global SME for Databricks and helps AWS customers design, build, secure, and optimize Databricks workloads on AWS. | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/category/analytics/page/2/ | Analytics | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Analytics Modernize Apache Spark workflows using Spark Connect on Amazon EMR on Amazon EC2 by Philippe Wanner and Ege Oguzman on 18 DEC 2025 in Advanced (300) , Amazon EC2 , Amazon EMR , Technical How-to Permalink Comments Share In this post, we demonstrate how to implement Apache Spark Connect on Amazon EMR on Amazon Elastic Compute Cloud (Amazon EC2) to build decoupled data processing applications. We show how to set up and configure Spark Connect securely, so you can develop and test Spark applications locally while executing them on remote Amazon EMR clusters. How Taxbit achieved cost savings and faster processing times using Amazon S3 Tables by Larry Christensen , Derek Ziehl , Pranjal Gururani , and Washim Nawaz on 18 DEC 2025 in Amazon S3 Tables , Analytics , Customer Solutions , Intermediate (200) Permalink Comments Share In this post, we discuss how Taxbit partnered with Amazon Web Services (AWS) to streamline their crypto tax analytics solution using Amazon S3 Tables, achieving 82% cost savings and five times faster processing times. Create and update Apache Iceberg tables with partitions in the AWS Glue Data Catalog using the AWS SDK and AWS CloudFormation by Aarthi Srinivasan and Pratik Das on 18 DEC 2025 in Advanced (300) , AWS Glue , Learning Levels , Technical How-to Permalink Comments Share In this post, we show how to create and update Iceberg tables with partitions in the Data Catalog using the AWS SDK and AWS CloudFormation. Power data ingestion into Splunk using Amazon Data Firehose by Tarik Makota , Mitali Sheth , Roy Arsan , and Yashika Jain on 17 DEC 2025 in Amazon Data Firehose , Amazon Kinesis , Intermediate (200) , Technical How-to Permalink Comments Share With Kinesis Data Firehose, customers can use a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Kinesis Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Kinesis Data Firehose. Best practices for querying Apache Iceberg data with Amazon Redshift by Anusha Challa , Jonathan Katz , and Mohammed Alkateb on 17 DEC 2025 in Amazon Redshift , Amazon S3 Tables , Best Practices Permalink Comments Share In this post, we discuss the best practices that you can follow while querying Apache Iceberg data with Amazon Redshift IPv6 addressing with Amazon Redshift by Srini Ponnada , Zirui Hua , Niranjan Kulkarni , Sandeep Adwankar , Sumanth Punyamurthula , and Yanzhu Ji on 17 DEC 2025 in Advanced (300) , Amazon Redshift , Technical How-to Permalink Comments Share As we witness the gradual transition from IPv4 to IPv6, AWS continues to expand its support for dual-stack networking across its service portfolio. In this post, we show how you can migrate your Amazon Redshift Serverless workgroup from IPv4-only to dual-stack mode, so you can make your data warehouse future ready. 
Reference guide for building a self-service analytics solution with Amazon SageMaker by Navnit Shukla , Ayan Majumder , and Karan Edikala on 16 DEC 2025 in Advanced (300) , Amazon SageMaker Data & AI Governance , Technical How-to Permalink Comments Share In this post, we show how to use Amazon SageMaker Catalog to publish data from multiple sources, including Amazon S3, Amazon Redshift, and Snowflake. This approach enables self-service access while ensuring robust data governance and metadata management. Introducing the Apache Spark troubleshooting agent for Amazon EMR and AWS Glue by Jake Zych , Andrew Kim , Maheedhar Reddy Chappidi , Arunav Gupta , Jeremy Samuel , Muhammad Ali Gulzar , Mohit Saxena , Mukul Prasad , Kartik Panjabi , Shubham Mehta , Vishal Kajjam , Vidyashankar Sivakumar , and Wei Tang on 15 DEC 2025 in Advanced (300) , Amazon EMR , AWS Glue , Kiro , Technical How-to Permalink Comments Share In this post, we show you how the Apache Spark troubleshooting agent helps analyze Apache Spark issues by providing detailed root causes and actionable recommendations. You’ll learn how to streamline your troubleshooting workflow by integrating this agent with your existing monitoring solutions across Amazon EMR and AWS Glue. Introducing Apache Spark upgrade agent for Amazon EMR by Keerthi Chadalavada , McCall Peltier , Rajendra Gujja , Bo Li , Malinda Malwala , Mohit Saxena , Mukul Prasad , Vaibhav Naik , Pradeep Patel , Shubham Mehta , and XiaoRun Yu on 15 DEC 2025 in Advanced (300) , Amazon EMR , Kiro , Technical How-to Permalink Comments Share In this post, you learn how to assess your existing Amazon EMR Spark applications, use the Spark upgrade agent directly from the Kiro IDE, upgrade a sample e-commerce order analytics Spark application project (including build configs, source code, tests, and data quality validation), and review code changes before rolling them out through your CI/CD pipeline. Accelerate Apache Hive read and write on Amazon EMR using enhanced S3A by Ramesh Kandasamy , Giovanni Matteo Fumarola , Himanshu Mishra , Paramvir Singh , and Anmol Sundaram on 15 DEC 2025 in Amazon EMR , Analytics , Announcements , Intermediate (200) Permalink Comments Share In this post, we demonstrate how Apache Hive on Amazon EMR 7.10 delivers significant performance improvements for both read and write operations on Amazon S3. ← Older posts Newer posts → Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Agentic AI? Cloud Computing Concepts Hub AWS Cloud Security What's New Blogs Press Releases Resources Getting Started Training AWS Trust Center AWS Solutions Library Architecture Center Product and Technical FAQs Analyst Reports AWS Partners Developers Builder Center SDKs & Tools .NET on AWS <a data-rg-n="Link" href="/developer/language/python/?nc1=f_dr" data-rigel-analytics="{"name":"Link","properties":{"size":1}}" class="rgft | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/category/management-tools/amazon-cloudwatch/ | Amazon CloudWatch | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Amazon CloudWatch Amazon OpenSearch Serverless monitoring: A CloudWatch setup guide by Urmila Iyer and Parth Shah on 24 SEP 2025 in Advanced (300) , Amazon CloudWatch , Amazon OpenSearch Service , Monitoring and observability , Serverless , Technical How-to Permalink Comments Share In this post, we explore commonly used Amazon CloudWatch metrics and alarms for OpenSearch Serverless, walking through the process of selecting relevant metrics, setting appropriate thresholds, and configuring alerts. This guide will provide you with a comprehensive monitoring strategy that complements the serverless nature of your OpenSearch deployment while maintaining full operational visibility. Automate and orchestrate Amazon EMR jobs using AWS Step Functions and Amazon EventBridge by Senthil Kamala Rathinam and Shashidhar Makkapati on 15 SEP 2025 in Advanced (300) , Amazon CloudWatch , Amazon EC2 , Amazon EMR , Amazon EventBridge , Analytics , AWS Step Functions , Technical How-to Permalink Comments Share In this post, we discuss how to build a fully automated, scheduled Spark processing pipeline using Amazon EMR on EC2, orchestrated with Step Functions and triggered by EventBridge. We walk through how to deploy this solution using AWS CloudFormation, processes COVID-19 public dataset data in Amazon Simple Storage Service (Amazon S3), and store the aggregated results in Amazon S3. Amazon EMR Serverless observability, Part 1: Monitor Amazon EMR Serverless workers in near real time using Amazon CloudWatch by Kashif Khan and Veena Vasudevan on 27 SEP 2024 in Amazon CloudWatch , Amazon EMR , Analytics , Monitoring and observability Permalink Comments Share We have launched job worker metrics in Amazon CloudWatch for EMR Serverless. This feature allows you to monitor vCPUs, memory, ephemeral storage, and disk I/O allocation and usage metrics at an aggregate worker level for your Spark and Hive jobs. This post is part of a series about EMR Serverless observability. In this post, we discuss how to use these CloudWatch metrics to monitor EMR Serverless workers in near real time. Create a customizable cross-company log lake for compliance, Part I: Business Background by Colin Carson and Sean O’Sullivan on 01 AUG 2024 in Advanced (300) , Amazon CloudWatch , AWS CloudTrail , AWS Glue , AWS Systems Manager , Compliance , Security, Identity, & Compliance , Technical How-to Permalink Comments Share As builders, sometimes you want to dissect a customer experience, find problems, and figure out ways to make it better. That means going a layer down to mix and match primitives together to get more comprehensive features and more customization, flexibility, and freedom. In this post, we introduce Log Lake, a do-it-yourself data lake based on logs from CloudWatch and AWS CloudTrail. Deliver Amazon CloudWatch logs to Amazon OpenSearch Serverless by Balaji Mohan , Muthu Pitchaimani , and Souvik Bose on 31 JUL 2024 in Amazon CloudWatch , Amazon OpenSearch Service , Serverless , Technical How-to Permalink Comments Share In this blog post, we will show how to use Amazon OpenSearch Ingestion to deliver CloudWatch logs to OpenSearch Serverless in near real-time. 
We outline a mechanism to connect a Lambda subscription filter with OpenSearch Ingestion and deliver logs to OpenSearch Serverless without explicitly needing a separate subscription filter for it. Disaster recovery strategies for Amazon MWAA – Part 1 by Parnab Basak , Chandan Rupakheti , Vinod Jayendra , and Rupesh Tiwari on 16 JAN 2024 in Amazon CloudWatch , Amazon EventBridge , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon Simple Storage Service (S3) , Architecture , AWS Lambda , AWS Step Functions , Intermediate (200) , Technical How-to Permalink Comments Share In the dynamic world of cloud computing, ensuring the resilience and availability of critical applications is paramount. Disaster recovery (DR) is the process by which an organization anticipates and addresses technology-related disasters. For organizations implementing critical workload orchestration using Amazon Managed Workflows for Apache Airflow (Amazon MWAA), it is crucial to have a DR plan […] Enable metric-based and scheduled scaling for Amazon Managed Service for Apache Flink by Francisco Morillo and Deepthi Mohan on 10 JAN 2024 in Amazon CloudWatch , Amazon EventBridge , Amazon Managed Service for Apache Flink , AWS Lambda , AWS Step Functions , Best Practices , Technical How-to Permalink Comments Share Thousands of developers use Apache Flink to build streaming applications to transform and analyze data in real time. Apache Flink is an open source framework and engine for processing data streams. It’s highly available and scalable, delivering high throughput and low latency for the most demanding stream-processing applications. Monitoring and scaling your applications is critical […] Monitor Apache Spark applications on Amazon EMR with Amazon Cloudwatch by Le Clue Lubbe on 30 AUG 2023 in Amazon CloudWatch , Amazon EMR , How-To , Intermediate (200) , Technical How-to Permalink Comments Share To improve a Spark application’s efficiency, it’s essential to monitor its performance and behavior. In this post, we demonstrate how to publish detailed Spark metrics from Amazon EMR to Amazon CloudWatch. This will give you the ability to identify bottlenecks while optimizing resource utilization. Monitor data pipelines in a serverless data lake by Virendhar Sivaraman and Vivek Shrivastava on 09 AUG 2023 in Amazon Athena , Amazon CloudWatch , Amazon EventBridge , Amazon Simple Notification Service (SNS) , Amazon Simple Storage Service (S3) , AWS Glue , AWS Lambda , Intermediate (200) , Technical How-to Permalink Comments Share AWS serverless services, including but not limited to AWS Lambda, AWS Glue, AWS Fargate, Amazon EventBridge, Amazon Athena, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and Amazon Simple Storage Service (Amazon S3), have become the building blocks for any serverless data lake, providing key mechanisms to ingest and transform data […] Centralize near-real-time governance through alerts on Amazon Redshift data warehouses for sensitive queries by Rajdip Chaudhuri and Dhiraj Thakur on 29 JUN 2023 in Advanced (300) , Amazon Athena , Amazon CloudWatch , Amazon Quick Sight , Amazon Redshift , AWS Lambda , Technical How-to Permalink Comments Share Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud that delivers powerful and secure insights on all your data with the best price-performance. With Amazon Redshift, you can analyze your data to derive holistic insights about your business and your customers. 
In many organizations, one or multiple Amazon Redshift data warehouses […] ← Older posts Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Agentic AI? <a data-rg-n="Link" href="/what-is/?nc1=f_cc" data-rigel-analytics="{"name":"Link","properties":{"size | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/use-amazon-sagemaker-custom-tags-for-project-resource-governance-and-cost-tracking/ | Use Amazon SageMaker custom tags for project resource governance and cost tracking | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Use Amazon SageMaker custom tags for project resource governance and cost tracking by David Victoria , Ahan Malli , and Rohit Srikanta on 08 JAN 2026 in Advanced (300) , Amazon SageMaker , Amazon SageMaker Unified Studio , Technical How-to Permalink Comments Share Amazon SageMaker announced a new feature that you can use to add custom tags to resources created through an Amazon SageMaker Unified Studio project. This helps you enforce tagging standards that conform to your organization’s service control policies (SCPs) and helps enable cost tracking reporting practices on resources created across the organization. As a SageMaker administrator, you can configure a project profile with tag configurations that will be pushed down to projects that currently use or will use that project profile. The project profile is set up to pass either required key and value tag pairings or pass the key of the tag with a default value that can be modified during project creation. All tags passed to the project will result in the resources created by that project being tagged. This provides you with a governance mechanism that enforces that project resources have the expected tags across all projects of the domain. The first release of custom tags for project resources is supported through an application programming interface (API), through Amazon DataZone SDKs. In this post, we look at use cases for custom tags and how to use the AWS Command Line Interface (AWS CLI) to add tags to project resources. What we hear from customers As customers continue to build and collaborate using AWS tools for model development, generative AI, data processing, and SQL analytics, they see the need to bring control and visibility into the resources being created. To support connectivity to these AWS tools from SageMaker Unified Studio projects, many different types of resources across AWS services need to be created. These resources are created through AWS CloudFormation stacks (through project environment deployment) by the Amazon SageMaker service. From customers we hear the following use cases: Customers need to enforce that tagging practices conform to company policies through the use of AWS controls, such as SCPs, for resource creation. These controls block the creation of resources unless specific tags are placed on the resource. Customers can also start with policies to enforce that the correct tags are placed when resources are created with the additional goal of standardizing on resource reporting. By placing identifiable information on resources when created, they enforce consistency and completeness when performing cost attribution reporting and observability. Customer Swiss Life uses SageMaker as a single solution for cataloging, discovery, sharing, and governance of their enterprise data across business domains. They require all resources have a set of mandatory tags for their finance group to bill organizations across their company for the AWS resources created. “The launch of project resource tags for Amazon SageMaker allows us to bring visibility to the costs incurred across our accounts. 
With this capability we are able to meet the resource tagging guidelines of our company and have confidence in attributing costs across our multi-account setup for the resources created by Amazon SageMaker projects.” – Tim Kopacz, Software Developer at Swiss Life Prerequisites To get started with custom tags, you must have the following resources: A SageMaker Unified Studio domain. An AWS Identity and Access Management (IAM) entity with privileges to make AWS CLI calls to the domain. An IAM entity authorized to make changes to the domain IAM provisioning role. If SageMaker created this for you, it will be called AmazonSageMakerProvisioning-<accountId> . The provisioning role provisions and manages resources defined in the selected blueprints in your account. How to set up project resource tags The following steps outline how you can configure custom tags for your SageMaker Unified Studio project resources: (Optional) Update the SageMaker provisioning role to permit specific tag keys. Create a new project profile with project resource tags configured. Create a new project with project resource tags. Update an existing project with project resource tags. Validate that the resources are tagged. (Optional) Update a SageMaker provisioning role to permit tag key values The AmazonSageMakerProvisioning-<accountId> role has an AWS managed policy with condition aws:TagKeys allowing tags to be created by this role only if the tag key begins with AmazonDataZone . For this example, we will change the tag key to begin with different strings. Skip to Create a new project profile with project resource tags configured if you don’t need tag keys to have a different structure (such as begins with, contains, and so on) Open the AWS Management Console and go to IAM . In the navigation pane, choose Roles . In the list, choose AmazonSageMakerProvisioning- <accountId> . Choose the Permissions tab. Choose Add permissions , and then choose Create inline policy . Under Policy editor , select JSON . Enter the following policy. Add the strings under the condition aws:TagKeys . In this example, tag keys beginning with ACME or tag keys with the exact match of CostCenter will be created by the role. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "CustomTagsUnTagPermissions", "Effect": "Allow", "Action": [ "codecommit:UntagResource", "iam:UntagRole", "logs:UntagResource", "athena:UntagResource", "redshift-serverless:UntagResource", "scheduler:UntagResource", "bedrock:UntagResource", "neptune-graph:UntagResource", "quicksight:UntagResource", "glue:UntagResource", "airflow:UntagResource", "secretsmanager:UntagResource", "lambda:UntagResource", "emr-serverless:UntagResource", "elasticmapreduce:RemoveTags", "sagemaker:DeleteTags", "ec2:DeleteTags" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceAccount": "${aws:PrincipalAccount}" }, "ForAllValues:StringLike": { "aws:TagKeys": [ "AmazonDataZone*", "ACME*", "CostCenter" ] }, "Null": { "aws:ResourceTag/AmazonDataZoneProject": "false" } } }, { "Sid": "CustomTagsTaggingPermissions", "Effect": "Allow", "Action": [ "cloudformation:TagResource", "codecommit:TagResource", "iam:TagRole", "glue:TagResource", "athena:TagResource", "lambda:TagResource", "redshift-serverless:TagResource", "logs:TagResource", "secretsmanager:TagResource", "sagemaker:AddTags", "emr-serverless:TagResource", "neptune-graph:TagResource", "bedrock:TagResource", "elasticmapreduce:AddTags", "airflow:TagResource", "scheduler:TagResource", "quicksight:TagResource", "emr-containers:TagResource", "logs:CreateLogGroup", "athena:CreateWorkGroup", "scheduler:CreateScheduleGroup", "cloudformation:CreateStack", "ec2:*" ], "Resource": "*", "Condition": { "ForAnyValue:StringLike": { "aws:TagKeys": [ "AmazonDataZone*", "ACME*", "CostCenter" ] }, "StringEquals": { "aws:ResourceAccount": "${aws:PrincipalAccount}" } } } ] } It’s possible to scope down the specific AWS service tag and un-tag permissions based on which blueprints or capabilities are being used. Create a new project profile with project resource tags configured Use the following steps to create a new SQL Analytics project profile with custom tags. The example uses AWS CLI commands. Open the AWS CloudShell console. Create a project profile using the following CLI command. The project-resource-tags parameter consists of key (tag key), value (tag value), and isValueEditable (boolean indicating if the tag value can be modified during project creation or update). The allow-custom-project-resource-tags parameter set to true permits the project creator to create additional key-value pairs. The key needs to conform to the inline policy of the AmazonSageMakerProvisioning-<accountId> role. The project-resource-tags-description parameter is a description field for project resource tags. The max character limit is 2,048. The description needs to be passed in every time create-project-profile or update-project-profile is called. 
aws datazone create-project-profile \ --name "SQL Analytics with Project Resource Tags" \ --description "Analyze your data in SageMaker Lakehouse using SQL" \ --domain-identifier "$DOMAIN_ID" \ --region "$REGION" \ --status ENABLED \ --project-resource-tags '[ { "key": "ACME-Application", "value": "SageMaker", "isValueEditable": false }, { "key": "CostCenter", "value": "123", "isValueEditable": true } ]' \ --allow-custom-project-resource-tags \ --environment-configurations '[ { "name": "Tooling", "description": "Configuration for the Tooling Environment", "environmentBlueprintId": "", "deploymentMode": "ON_CREATE", "deploymentOrder": 0, "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "enableSpaces", "value": "false", "isEditable": false }, { "name": "maxEbsVolumeSize", "isEditable": false }, { "name": "idleTimeoutInMinutes", "isEditable": false }, { "name": "lifecycleManagement", "isEditable": false }, { "name": "enableNetworkIsolation", "isEditable": false } ] } }, { "name": "Lakehouse Database", "description": "Creates databases in Amazon SageMaker Lakehouse for storing tables in S3 and Amazon Athena resources for your SQL workloads", "environmentBlueprintId": "", "deploymentMode": "ON_CREATE", "deploymentOrder": 1, "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "glueDbName", "value": "glue_db", "isEditable": true } ] } }, { "name": "OnDemand RedshiftServerless", "description": "Enables you to create an additional Amazon Redshift Serverless workgroup for your SQL workloads", "environmentBlueprintId": "", "deploymentMode": "ON_DEMAND", "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "redshiftDbName", "value": "dev", "isEditable": true }, { "name": "redshiftMaxCapacity", "value": "512", "isEditable": true }, { "name": "redshiftWorkgroupName", "value": "redshift-serverless-workgroup", "isEditable": true }, { "name": "redshiftBaseCapacity", "value": "128", "isEditable": true }, { "name": "connectionName", "value": "redshift.serverless", "isEditable": true }, { "name": "connectToRMSCatalog", "value": "false", "isEditable": false } ] } }, { "name": "OnDemand Catalog for Redshift Managed Storage", "description": "Enables you to create additional catalogs in Amazon SageMaker Lakehouse for storing data in Redshift Managed Storage", "environmentBlueprintId": "", "deploymentMode": "ON_DEMAND", "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "catalogName", "isEditable": true }, { "name": "catalogDescription", "value": "RMS catalog", "isEditable": true } ] } } ]' This project profile will have the tag ACME-Application = SageMaker placed on all projects associated to the project profile and cannot be modified by the project creator. The tag CostCenter = 123 can have the value modified by the project creator because the isValueEditable property is set to true . Grant permissions for users to use the project profile during project creation. In the Authorization section of the project profile set either Selected users or groups or Allow all users and groups . The use of the allow-custom-project-resource-tags parameter means the project creator can add their own tags (key-value pair). 
The key must conform to the condition check in the policy of the provisioning role ( AmazonSageMakerProvisioning-<accountId> ). If the allow-custom-project-resource-tags parameter is changed to false after a project created tags, tags created by the project will be removed during the next project update. Updates to the project profile Updates to project resource tags are possible through the update-project-profile command. The command will replace all values in the project-resource-tags section so be sure to include the exhaustive set of tags. Updates to the project profile are reflected in projects after running the update-project command or when a new project is created using the project profile. The following example adds a new tag, ACME-BusinessUnit = Retail . There are three ways to work with the project-resource-tags parameter when updating the project profile. Passing a non-empty list of project resource tags will replace the tags currently configured on the project profile. Passing an empty list of project resource tags will clear out all previously configured tags: --project-resource-tags '[]' Not including the project resource tag parameter will keep previously configured tags as-is. aws datazone update-project-profile \ --domain-identifier "$DOMAIN_ID" \ --identifier "$PROJECT_PROFILE_ID" \ --region "$REGION" \ --project-resource-tags '[ { "key": "ACME-Application", "value": "SageMaker", "isValueEditable": false }, { "key": "CostCenter", "value": "123", "isValueEditable": true }, { "key": "ACME-BusinessUnit", "value": "Retail", "isValueEditable": false } ]' Create a new project with project resource tags The following steps walk you through creating a new project that inherits tags from the project profile and lets the project creator modify one of the tag values. Create a project using the following example CLI command. Modify the CostCenter tag value using the --resource-tags parameter. Tags configured on the project profile where the isValueEditable attribute is false will be pushed to the project automatically. aws datazone create-project \ --domain-identifier "$DOMAIN_ID" \ --region "$REGION" \ --name "$PROJECT_NAME" \ --description "New project with tags" \ --project-profile-id "$PROJECT_PROFILE_ID" \ --resource-tags '{ "CostCenter": "456" }' Update existing project with project resource tags For existing projects associated to the project profile, you must update the project for the new tags to be applied. Update the project using the following example CLI command. In this scenario, an editable value needs to be updated and a new tag added. Tag CostCenter will have its default value overwritten as “789” and the new ACME-Department = Finance tag will be added. aws datazone update-project \ --domain-identifier "$DOMAIN_ID" \ --identifier "$PROJECT_ID" \ --project-profile-version "latest" \ --region "$REGION" \ --resource-tags '{ "CostCenter": "789", "ACME-Department": "Finance" }' Project level tags (those not configured from the project profile) need to be passed during project update to be preserved. For tags with isValueEditable = true configured from the project profile, any override previously set needs to be applied or the value will revert to the default from the project profile. Validating resources are tagged Validate that tags are placed correctly. An example resource that is created by the project is the project IAM role. Viewing the tags for this role should show the tags configured from the project profile. 
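Before walking through the console steps that follow, you can also confirm the tags directly from the AWS CLI. This is a minimal sketch; the role name shown is a hypothetical example of the datazone_usr_role_ naming pattern described below:

aws iam list-role-tags \
  --role-name datazone_usr_role_<project-id>_<environment-id>

The response should list the tags configured through the project profile (for example, ACME-Application and CostCenter), alongside any AmazonDataZone-prefixed tags that the service applies to project resources.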
Open SageMaker Unified Studio to get the project role from the Project details section of the project. The role name begins with datazone_usr_role_ . Open the IAM console . In the navigation pane, choose Roles . Search for the project IAM role. Select the Tags tab. Conclusion In this post, we discussed tagging related use cases from customers and walked through getting started with custom tags in Amazon SageMaker to place tags on the resources created by the project. By giving administrators a way to configure project profiles with standardized tag configurations, you can now help ensure consistent tagging practices across all SageMaker Unified Studio projects while maintaining compliance with SCPs. This feature addresses two critical customer needs: enforcing organizational tagging standards through automated governance mechanisms and enabling accurate cost attribution reporting across multi-service deployments. To learn more, visit Amazon SageMaker , then get started with Project resource tags . About the authors David Victoria David is a Senior Technical Product Manager with Amazon SageMaker at AWS. He focuses on improving administration and governance capabilities needed for customers to support their analytics systems. He is passionate about helping customers realize the most value from their data in a secure, governed manner. Rohit Srikanta Rohit is a Senior Software Engineer at AWS. He works on building and scaling services within Amazon SageMaker. He focuses on developing robust and scalable distributed systems and is passionate about solving complex engineering challenges to deliver maximum customer value. Ahan Malli Ahan is a Software Development Engineer at AWS. He works on the core data and governance layer behind Amazon SageMaker. He’s passionate about building scalable distributed systems and streamlining developer workflows. When he’s not coding, you can find him traveling or hiking Pacific Northwest trails. Loading comments… Resources Amazon Athena Amazon EMR Amazon Kinesis Amazon MSK Amazon QuickSight Amazon Redshift AWS Glue Follow Twitter Facebook LinkedIn Twitch Email Updates Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Agentic AI? Cloud Computing Concepts Hub AWS Cloud Security What's New Blogs Press Releases Resources Getting Started Training AWS Trust Center AWS Solutions Library Architecture Center Product and Technical FAQs Analyst Reports AWS Partners Developers Builder Center SDKs & Tools .NET on AWS Python on AWS Java on AWS PHP on AWS JavaScript on AWS Help Contact Us File a Support Ticket AWS re:Post Knowledge Center AWS Support Overview Get Expert Help AWS Accessibility Legal English Back to top Amazon is an Equal Opportunity Employer: Minority / Women / Disability / Veteran / Gender Identity / Sexual Orientation / Age. x facebook linkedin <path d="M4.68673 0.0559501C3.83553 0.0961101 3.25425 0.23195 2.74609 0.43163C2.22017 0.63659 1.77441 0.91163 1.3308 | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/category/learning-levels/intermediate-200/ | Intermediate (200) | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Intermediate (200) AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, […] Amazon EMR Serverless eliminates local storage provisioning, reducing data processing costs by up to 20% by Karthik Prabhakar , Matt Tolton , Neil Mukerje , and Ravi Kumar Singh on 06 JAN 2026 in Amazon EMR , Analytics , Announcements , Intermediate (200) , Serverless Permalink Comments Share In this post, you’ll learn how Amazon EMR Serverless eliminates the need to configure local disk storage for Apache Spark workloads through a new serverless storage capability. We explain how this feature automatically handles shuffle operations, reduces data processing costs by up to 20%, prevents job failures from disk capacity constraints, and enables elastic scaling by decoupling storage from compute. How Taxbit achieved cost savings and faster processing times using Amazon S3 Tables by Larry Christensen , Derek Ziehl , Pranjal Gururani , and Washim Nawaz on 18 DEC 2025 in Amazon S3 Tables , Analytics , Customer Solutions , Intermediate (200) Permalink Comments Share In this post, we discuss how Taxbit partnered with Amazon Web Services (AWS) to streamline their crypto tax analytics solution using Amazon S3 Tables, achieving 82% cost savings and five times faster processing times. Power data ingestion into Splunk using Amazon Data Firehose by Tarik Makota , Mitali Sheth , Roy Arsan , and Yashika Jain on 17 DEC 2025 in Amazon Data Firehose , Amazon Kinesis , Intermediate (200) , Technical How-to Permalink Comments Share With Kinesis Data Firehose, customers can use a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Kinesis Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Kinesis Data Firehose. Accelerate Apache Hive read and write on Amazon EMR using enhanced S3A by Ramesh Kandasamy , Giovanni Matteo Fumarola , Himanshu Mishra , Paramvir Singh , and Anmol Sundaram on 15 DEC 2025 in Amazon EMR , Analytics , Announcements , Intermediate (200) Permalink Comments Share In this post, we demonstrate how Apache Hive on Amazon EMR 7.10 delivers significant performance improvements for both read and write operations on Amazon S3. 
Introducing AWS Glue 5.1 for Apache Spark by Chiho Sugimoto , Bo Li , Kartik Panjabi , Peter Manastyrny , Noritaka Sekiyama , and Peter Tsai on 09 DEC 2025 in Announcements , AWS Glue , Intermediate (200) Permalink Comments Share AWS recently announced Glue 5.1, a new version of AWS Glue that accelerates data integration workloads in AWS. AWS Glue 5.1 upgrades the Spark engines to Apache Spark 3.5.6, giving you newer Spark release along with the newer dependent libraries so you can develop, run, and scale your data integration workloads and get insights faster. In this post, we describe what’s new in AWS Glue 5.1, key highlights on Spark and related libraries, and how to get started on AWS Glue 5.1. Achieve 2x faster data lake query performance with Apache Iceberg on Amazon Redshift by Kalaiselvi Kamaraj , Aamer Shah , Fabian Nagel , Ravi Animi , and Stefan Gromoll on 26 NOV 2025 in Amazon Redshift , Announcements , Intermediate (200) Permalink Comments Share In 2025, Amazon Redshift delivered several performance optimizations that improved query performance over twofold for Iceberg workloads on Amazon Redshift Serverless, delivering exceptional performance and cost-effectiveness for your data lake workloads. In this post, we describe some of the optimizations that led to these performance gains. Accelerate data lake operations with Apache Iceberg V3 deletion vectors and row lineage by Ron Ortloff on 26 NOV 2025 in Amazon EMR , Amazon SageMaker , Amazon Simple Storage Service (S3) , Announcements , AWS Glue , Intermediate (200) , Technical How-to Permalink Comments Share In this post, we walk you through the new capabilities in Iceberg V3, explain how deletion vectors and row lineage address these challenges, explore real-world use cases across industries, and provide practical guidance on implementing Iceberg V3 features across AWS analytics, catalog, and storage services. How Octus achieved 85% infrastructure cost reduction with zero downtime migration to Amazon OpenSearch Service by Vaibhav Sabharwal , Andre Kurait , Harmandeep Sethi, Serhii Shevchenko, Govind Bajaj, Virendra Shinde , and Brian Presley on 26 NOV 2025 in Amazon OpenSearch Service , Customer Solutions , Intermediate (200) Permalink Comments Share This post highlights how Octus migrated its Elasticsearch workloads running on Elastic Cloud to Amazon OpenSearch Service. The journey traces Octus’s shift from managing multiple systems to adopting a cost-efficient solution powered by OpenSearch Service. Introducing Cluster Insights: Unified monitoring dashboard for Amazon OpenSearch Service clusters by Siddhant Gupta , Gagan Juneja , Jinhwan Hyon , and Varunsrivathsa Venkatesha on 21 NOV 2025 in Amazon OpenSearch Service , Announcements , Intermediate (200) Permalink Comments Share This blog will guide you through setting up and using Cluster Insights, including key features and metrics. By the conclusion, you’ll understand how to use Cluster insights to recognize and address performance and resiliency issues within your OpenSearch Service clusters. ← Older posts Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Agentic AI? Cloud Computing Concepts Hub AWS Cloud Security What's New Blogs Press Releases Resources Getting Started Training AWS Trust Center AWS Solutions Library <a data-rg-n="Link" href="/architecture/?nc1=f_cc" data-rigel-analytics="{"name":"Link" | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/navigating-architectural-choices-for-a-lakehouse-using-amazon-sagemaker/ | Navigating architectural choices for a lakehouse using Amazon SageMaker | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Navigating architectural choices for a lakehouse using Amazon SageMaker by Lakshmi Nair and Saman Irfan on 12 JAN 2026 in Amazon SageMaker Data & AI Governance , Amazon SageMaker Lakehouse , Amazon SageMaker Unified Studio , Analytics Permalink Comments Share Organizations today are using data more than ever to drive decision-making and innovation. Because they work with petabytes of information, they have traditionally gravitated towards two distinct paradigms—data lakes and data warehouses. While each paradigm excels at specific use cases, they often create unintended barriers between the data assets. Data lakes are often built on object storage such as Amazon Simple Storage Service (Amazon S3) , which provide flexibility by supporting diverse data formats and schema-on-read capabilities. This enables multi-engine access where various processing frameworks (such as Apache Spark , Trino , and Presto ) can query the same data. On the other hand, data warehouses (such as Amazon Redshift ) excel in areas such as ACID (atomicity, consistency, isolation and durability) compliance, performance optimization, and straightforward deployment, making them suitable for structured and complex queries. As data volumes grow and analytics needs become more complex, organizations seek to bridge these silos and use the strengths of both paradigms. This is where the concept of lakehouse architecture is applied, offering a unified approach to data management and analytics. Over time, several distinct lakehouse approaches have emerged. In this post, we show you how to evaluate and choose the right lakehouse pattern for your needs. The data lake centric lakehouse approach begins with the scalability, cost-effectiveness, and flexibility of a traditional data lake built on object storage. The goal is to add a layer of transactional capabilities and data management traditionally found in databases, primarily through open table formats (such as Apache Hudi , Delta Lake , or Apache Iceberg ). While open table formats have made significant strides by introducing ACID guarantees for single-table operations in data lakes, implementing multi-table transactions with complex referential integrity constraints and joins remains challenging. The fundamental nature of querying petabytes of files on object storage, often through distributed query engines, can result in slow interactive queries at high concurrency when compared to a highly optimized, indexed, and materialized data warehouse. Open table formats introduce compaction and indexing, but the full suite of intelligent storage optimizations found in highly mature, proprietary data warehouses is still evolving in data lake-centric architecture. The data warehouse centric lakehouse approach offers robust analytical capabilities but has significant interoperability challenges. 
Though data warehouses provide JAVA Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers for external access, the underlying data remains in proprietary formats, making it difficult for external tools or services to directly access it without complex extract, transform, and load (ETL) or API layers. This can lead to data duplication and latency. A data warehouse architecture might support reading open table formats, but its ability to write to them or participate in their transactional layers can be limited. This restricts true interoperability and can create shadow data silos. On AWS, you can build a modern, open lakehouse architecture to achieve unified access to both data warehouses and data lakes. By using this approach, you can build sophisticated analytics, machine learning (ML), and generative AI applications while maintaining a single source of truth for their data. You don’t have to choose between a data lake or data warehouse. You can use existing investments and preserve the strengths of both paradigms while eliminating their respective weaknesses. The lakehouse architecture on AWS embraces open table formats such as Apache Hudi, Delta Lake, and Apache Iceberg. You can accelerate your lakehouse journey with the next generation of Amazon SageMaker , which delivers an integrated experience for analytics and AI with unified access to data. SageMaker is built on an open lakehouse architecture that is fully compatible with Apache Iceberg. By extending support for Apache Iceberg REST APIs, SageMaker significantly adds interoperability and accessibility across various Apache Iceberg-compatible query engines and tools. At the core of this architecture is a metadata management layer built on AWS Glue Data Catalog and AWS Lake Formation , which provide unified governance and centralized access control. Foundations of the Amazon SageMaker lakehouse architecture The lakehouse architecture of Amazon SageMaker has four main components that work together to create a unified data platform. Flexible storage to adapt to the workload patterns and requirements Technical catalog that serves as a single source of truth for all metadata Integrated permission management with fine-grained access control across all data assets Open access framework built on Apache Iceberg REST APIs for universal compatibility Catalogs and permissions When building an open lakehouse, the catalog—your central repository of metadata—is a critical component for data discovery and governance. There are two types of catalogs in the lakehouse architecture of Amazon SageMaker: managed catalogs and federated catalogs. Managed catalog refers to when the metadata is managed by the lakehouse, and the data is stored in a general purpose S3 bucket. Federated catalog refers to mounting or connecting to external or existing data sources so you can query data from data sources such as Amazon Redshift , Snowflake, and Amazon DynamoDB without explicitly moving the data. For more information, see Data connections in the lakehouse architecture of Amazon SageMaker . You can use an AWS Glue crawler to automatically discover and register this metadata in Data Catalog . Data Catalog stores the schema and table metadata of your data assets, effectively turning files into logical tables. After your data is cataloged, the next challenge is controlling who can access it. While you could use complex S3 bucket policies for every folder, this approach is difficult to manage and scale. 
Lake Formation provides a centralized database-style permissions model on the Data Catalog, giving you the flexibility to grant or revoke fine-grained access at row, column, and cell levels for individual users or roles. Open access with Apache Iceberg REST APIs The lakehouse architecture described in the preceding section and shown in the following figure also uses the AWS Glue Iceberg REST catalog through the service endpoint, which provides OSS compatibility, enabling increased interoperability for managing Iceberg table metadata across Spark and other open source analytics engines. You can choose the appropriate API based on table format and use case requirements. In this post, we explore various lakehouse architecture patterns, focusing on how to optimally use data lake and data warehouse to create robust, scalable, and performance-driven data solutions. Bringing data into your lakehouse on AWS When building a lakehouse architecture, you can choose from three distinct patterns to access and integrate your data, each offering unique advantages for different use cases. Traditional ETL is the classic method of extracting data, transforming it and loading it into your lakehouse. When to use it: You need complex transformations and require highly curated and optimized data sets for downstream applications for better performance You need to perform historical data migrations You need data quality enforcement and standardization at scale You need highly governed curated data in a lakehouse Zero-ETL is a modern architectural pattern where data automatically and continuously replicates from a source system to lakehouse with minimal or no manual intervention or custom code. Behind the scenes, the pattern uses change data capture (CDC) to automatically stream all new inserts, updates, and deletes from the source to the target. This architectural pattern is effective when the source system maintains a high degree of data cleanliness and structure, minimizing the need for heavy pre-load transformations, or when data refinement and aggregation can occur at the target end within lakehouse. Zero-ETL replicates data with minimal delay, and the transformation logic is performed on the target end closer to where the insights are generated by shifting it to a more efficient, post-load phase. When to use it: You need to reduce operational complexity and gain flexible control over data replication for both near real-time and batch use cases. You need limited customization. While zero-ETL implies minimal work, some light transformations might still be required on the replicated data. You need to minimize the need for specialized ETL expertise. You need to maintain data freshness without processing delays and reduce risk of data inconsistencies. Zero-ETL facilitates faster time-to-insight. Data federation (no-movement approach) is a method that enables querying and combining data from multiple disparate sources without physically moving or copying it into a single centralized location. This query-in-place approach allows the query engine to connect directly to the external source systems, delegate and execute queries, and combine results on the fly for presentation to the user. The effectiveness of this architecture pattern depends on three key factors: network latency between systems, source system performance capabilities, and the query engine’s ability to push down predicates to optimize query execution. 
This no-movement approach can significantly reduce data duplication and storage costs while providing real-time access to source data. When to use it: You need to query the source system directly to use operational analytics. You don’t want to duplicate data to save on storage space and associated costs within your Lakehouse. You’re willing to trade some query performance and governance for immediate data availability and one-time analysis of live data. You don’t need to frequently query the data. Understanding the storage layer of your lakehouse on AWS Now that you’ve seen different ways to get data into a lakehouse, the next question is where to store the data. As shown in the following figure, you can architect a modern open lakehouse on AWS by storing the data in a data lake (Amazon S3 or Amazon S3 Tables ) or data warehouse ( Redshift Managed Storage ), so you can optimize for both flexibility and performance based on your specific workload requirements. A modern lakehouse isn’t a single storage technology but a strategic combination of them. The decision of where and how to store your data impacts everything from the speed of your dashboards to the efficiency of your ML models. You must consider not only the initial cost of storage but also the long-term costs of data retrieval, the latency required by your users, and the governance necessary to maintain a single source of truth. In this section, we delve into architectural patterns for the data lake and the data warehouse and provide a clear framework for when to use each storage pattern. While they have historically been seen as competing architectures, the modern and open lakehouse approach uses both to create a single, powerful data platform. General purpose S3 A general purpose S3 bucket in Amazon S3 is the standard, foundational bucket type used for storing objects. It provides flexibility so that you can store your data in its native format without a rigid upfront schema. Because of the ability of an S3 bucket to decouple storage from compute, you can store the data in a highly scalable location, while a variety of query engines can access and process it independently. This means that you can choose the right tool for the job without having to move or duplicate the data. You can store petabytes of data without ever having to provision or manage storage capacity, and its tiered storage classes provide significant cost savings by automatically moving less-frequently accessed data to more affordable storage. The existing Data Catalog functions as a managed catalog. It’s identified by the AWS account number, which means there is no migration needed for existing Data Catalogs; they’re already available in the lakehouse and become the default catalog for the new data, as shown in the following figure. A foundational data lake on general purpose S3 is highly efficient for append-only workloads. However, its file-based nature lacks the transactional guarantees of a traditional database. This is where you can use the support of open-source transactional table formats such as Apache Hudi, Delta Lake, and Apache Iceberg. With these table formats, you can implement multi-version concurrency control, allowing multiple readers and writers to operate simultaneously without conflicts. They provide snapshot isolation, so that readers see consistent views of data even during write operations. A typical medallion architecture pattern with Apache Iceberg is depicted in the following figure. 
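To make this concrete, the following is a minimal sketch of how an open source Spark client might create and query an Iceberg table in such a data lake through the AWS Glue Iceberg REST endpoint. The catalog name (lakehouse), database (bronze_db, assumed to already exist in the Data Catalog), table name, Region, and library versions are hypothetical, and the caller is assumed to have the required Lake Formation and Amazon S3 permissions:

spark-sql \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1,software.amazon.awssdk:bundle:2.29.0 \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.lakehouse=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.lakehouse.type=rest \
  --conf spark.sql.catalog.lakehouse.uri=https://glue.us-east-1.amazonaws.com/iceberg \
  --conf spark.sql.catalog.lakehouse.warehouse=<account-id> \
  --conf spark.sql.catalog.lakehouse.rest.sigv4-enabled=true \
  --conf spark.sql.catalog.lakehouse.rest.signing-name=glue \
  --conf spark.sql.catalog.lakehouse.rest.signing-region=us-east-1 \
  -e "CREATE TABLE IF NOT EXISTS lakehouse.bronze_db.orders_raw (order_id string, order_ts timestamp, amount decimal(10,2)) USING iceberg; SELECT COUNT(*) FROM lakehouse.bronze_db.orders_raw;"

Because the endpoint speaks the open Iceberg REST catalog protocol, the same table remains accessible from Athena, Amazon EMR, or any other Iceberg-compatible engine without copying data.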
When building a lakehouse on AWS with Apache Iceberg, customers can choose between two primary approaches for storing their data on Amazon S3: General purpose S3 buckets with self-managed Iceberg or using the fully managed S3 Tables. Each path has distinct advantages, and the right choice depends on your specific needs for control, performance, and operational overhead. General purpose S3 with Self-managed Iceberg Using general purpose S3 buckets with self-managed Iceberg is a traditional approach where you store both data and Iceberg metadata files in standard S3 buckets. With this option, you maintain full control but are responsible for managing the complete Iceberg table lifecycle, including essential maintenance tasks such as compaction and garbage collection. When to use it: Maximum control: This approach provides complete control over the entire data life cycle. You can fine-tune every aspect of table maintenance, such as defining your own compaction schedules and strategies, which can be crucial for specific high-performance workloads or to optimize costs. Flexibility and customization: It is ideal for organizations with strong in-house data engineering expertise that need to integrate with a wider range of open-source tools and custom scripts. You can use Amazon EMR or Apache Spark to manage the table operations. Lower upfront costs: You pay only for Amazon S3 storage, API requests, and the compute resources you use for maintenance. This can be more cost-effective for smaller or less-frequent workloads where continuous, automated optimization isn’t necessary. Note: The query performance depends entirely on your optimization strategy. Without continuous, scheduled jobs for compaction, performance can degrade over time as data gets fragmented. You must monitor these jobs to ensure efficient querying. S3 Tables S3 Tables provides S3 storage that’s optimized for analytic workloads and provides Apache Iceberg compatibility to store tabular data at scale. You can integrate S3 table buckets and tables with Data Catalog and register the catalog as a Lake Formation data location from the Lake Formation console or using service APIs, as shown in the following figure. This catalog will be registered and mounted as a federated lakehouse catalog. When to use it: Simplified operations: S3 Tables automatically handles table maintenance tasks such as compaction, snapshot management and orphan file cleanup in the background. This automation eliminates the need to build and manage custom maintenance jobs, significantly reducing your operational overhead. Automated optimization: S3 Tables provides built-in automatic optimizations that improve query performance. These optimizations include background processes such as file compaction to address the small files problem and data layout optimizations specific to tabular data. However, this automation trades flexibility for convenience. Because you can’t control the timing or method of compaction operations, workloads with specific performance requirements might experience varying query performance. Focus on data usage: S3 Tables reduces the engineering overhead and shifts the focus to data consumption, data governance and value creation. Simplified entry to open table formats: It’s suitable for teams who are new to the concept of Apache Iceberg but want to use transactional capabilities on data lake. No external catalog: Suitable for smaller teams who don’t want to manage an external catalog. 
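For the S3 Tables path, the table bucket, namespace, and table can be created with a few CLI calls before being made available to analytics services through the Data Catalog and Lake Formation integration described above. This is a minimal sketch with hypothetical names and Region:

aws s3tables create-table-bucket \
  --name analytics-tables

aws s3tables create-namespace \
  --table-bucket-arn arn:aws:s3tables:us-east-1:<account-id>:bucket/analytics-tables \
  --namespace sales

aws s3tables create-table \
  --table-bucket-arn arn:aws:s3tables:us-east-1:<account-id>:bucket/analytics-tables \
  --namespace sales \
  --name daily_orders \
  --format ICEBERG

From this point on, compaction, snapshot management, and related maintenance run automatically in the background, which is the main operational difference from the self-managed approach described earlier.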
Redshift managed storage

While the data lake serves as the central source of truth for all your data, it's not the most suitable data store for every job. For the most demanding business intelligence and reporting workloads, the data lake's open and flexible nature can introduce performance unpredictability. To help ensure the desired performance, consider transitioning a curated subset of your data from the data lake to a data warehouse for the following reasons:

High-concurrency BI and reporting: When hundreds of business users are concurrently running complex queries on live dashboards, a data warehouse is specifically optimized to handle these workloads with predictable, sub-second query latency.
Predictable performance SLAs: For critical business processes that require data to be delivered at a guaranteed speed, such as financial reporting or end-of-day sales analysis, a data warehouse provides consistent performance.
Complex SQL workloads: While data lakes are powerful, they can struggle with highly complex queries involving numerous joins and massive aggregations. A data warehouse is purpose-built to run these relational workloads efficiently.

The lakehouse architecture on AWS supports Redshift Managed Storage (RMS), a storage option provided by Amazon Redshift, a fully managed, petabyte-scale data warehouse service in the cloud. RMS supports the automatic table optimizations offered in Amazon Redshift, such as built-in query optimizations for data warehousing workloads, automated materialized views, and AI-driven optimizations and scaling for frequently running workloads.

Federated RMS catalog: Onboard existing Amazon Redshift data warehouses to the lakehouse

Implementing a federated catalog with existing Amazon Redshift data warehouses creates a metadata-only integration that requires no data movement. This approach lets you extend your established Amazon Redshift investments into a modern open lakehouse framework while maintaining compatibility with existing workflows.

Amazon Redshift uses a hierarchical data organization structure:

Cluster level: Starts with a namespace
Database level: Contains multiple databases
Schema level: Organizes tables within databases

When you register your existing Amazon Redshift provisioned or serverless namespaces as a federated catalog in Data Catalog, this hierarchy maps directly into the lakehouse metadata layer. The lakehouse implementation on AWS supports multiple catalogs using a dynamic hierarchy to organize and map the underlying storage metadata. After you register a namespace, the federated catalog automatically mounts across all Amazon Redshift data warehouses in your AWS Region and account. During this process, Amazon Redshift internally creates external databases that correspond to data shares. This mechanism remains completely abstracted from end users. Federated catalogs give you immediate visibility into and access to data across your ecosystem. Permissions on federated catalogs can be managed with Lake Formation for both same-account and cross-account access.
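Because permissions sit in Lake Formation rather than in each warehouse, a single grant can govern who reads a Redshift table through the federated catalog. The following boto3 sketch illustrates the idea; the account ID, role, database, table, and especially the nested catalog identifier format are hypothetical, so check the resource syntax shown for your own federated catalog in the Lake Formation console before relying on it.

```python
import boto3

# Hedged sketch: grant SELECT on one table of a federated Redshift catalog to an
# analytics role, centralizing access control in Lake Formation.
lakeformation = boto3.client("lakeformation", region_name="us-east-1")

lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AnalyticsRole"
    },
    Resource={
        "Table": {
            # Illustrative identifier for a catalog registered from a Redshift namespace.
            "CatalogId": "111122223333:sales_rms_catalog/dev",
            "DatabaseName": "public",
            "Name": "orders",
        }
    },
    Permissions=["SELECT"],
)
```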
The real power of federated catalogs emerges when you access Amazon Redshift managed storage from external AWS engines such as Amazon Athena, Amazon EMR, or open source Apache Spark. Because Amazon Redshift uses proprietary block-based storage that only Amazon Redshift engines can read natively, AWS automatically provisions a service-managed Amazon Redshift Serverless instance in the background. This service-managed instance acts as a translation layer between external engines and Amazon Redshift managed storage. AWS establishes automatic data shares between your registered federated catalog and the service-managed Amazon Redshift Serverless instance to enable secure, efficient data access. AWS also creates a service-managed Amazon S3 bucket in the background for data transfer.

When an external engine such as Athena submits a query against an Amazon Redshift federated catalog, Lake Formation handles credential vending by providing temporary credentials to the requesting service. The query executes through the service-managed Amazon Redshift Serverless instance, which accesses data through the automatically established data shares, processes results, offloads them to a service-managed Amazon S3 staging area, and then returns results to the original requesting engine.

To track the compute cost of the federated catalog for an existing Amazon Redshift warehouse, use the AWS generated tag aws:redshift-serverless:LakehouseManagedWorkgroup with the value "True". To activate the AWS generated cost allocation tags for billing insight, follow the activation instructions. You can also view the computational cost of the resources in AWS Billing.

When to use it:

Existing Amazon Redshift investments: Federated catalogs are designed for organizations with existing Amazon Redshift deployments that want to use their data across multiple services without migration.
Cross-service data sharing: Implement federated catalogs so teams can share existing data in an Amazon Redshift data warehouse across different warehouses and centralize their permissions.
Enterprise integration requirements: This approach is suitable for organizations that need to integrate with established data governance. It also maintains compatibility with current workflows while adding lakehouse capabilities.
Infrastructure control and pricing: You retain full control over compute capacity for your existing warehouses for predictable workloads. You can optimize compute capacity, choose between on-demand and reserved capacity pricing, and fine-tune performance parameters. This provides cost predictability and performance control for consistent workloads.

When implementing a lakehouse architecture with multiple catalog types, selecting the appropriate query engine is crucial for both performance and cost optimization. This post focuses on the storage foundation of the lakehouse; however, for critical workloads involving extensive Amazon Redshift data operations, consider executing queries within Amazon Redshift or using Spark when possible. Complex joins spanning multiple Amazon Redshift tables through external engines might result in higher compute costs if the engines don't support full predicate pushdown.
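As a concrete illustration of that flow, the following sketch submits an Athena query against a federated Redshift catalog with boto3. The catalog, database, table, and results bucket names are hypothetical; in practice you would use the catalog name that appears for your registered namespace, and Lake Formation and the service-managed Amazon Redshift Serverless workgroup do the rest behind the scenes.

```python
import boto3

# Hedged sketch: run an Athena query against Redshift managed storage through a
# federated lakehouse catalog. All identifiers are illustrative.
athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString=(
        'SELECT customer_id, SUM(amount) AS total_spend '
        'FROM "sales_rms_catalog/dev"."public"."orders" '
        "GROUP BY customer_id ORDER BY total_spend DESC LIMIT 10"
    ),
    QueryExecutionContext={
        "Catalog": "sales_rms_catalog/dev",  # federated catalog name (illustrative)
        "Database": "public",
    },
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```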
Other use cases

Build a multi-warehouse architecture

Amazon Redshift supports data sharing, which you can use to share live data between source and target Amazon Redshift clusters. With data sharing, you can share live data without creating copies or moving data, enabling use cases such as workload isolation (hub-and-spoke architecture) and cross-group collaboration (data mesh architecture). Without a lakehouse architecture, you must create an explicit data share between source and target Amazon Redshift clusters. While managing these data shares in small deployments is relatively straightforward, it becomes complex in data mesh architectures.

The lakehouse architecture addresses this challenge by letting you publish your existing Amazon Redshift warehouses as federated catalogs. These federated catalogs are automatically mounted and made available as external databases in other consumer Amazon Redshift warehouses within the same account and Region. With this approach, you can maintain a single copy of data and use multiple data warehouses to query it, eliminating the need to create and manage multiple data shares while scaling with workload isolation (see the query sketch following this section). Permission management becomes centralized through Lake Formation, streamlining governance across the entire multi-warehouse environment.

Near real-time analytics on petabytes of transactional data with no pipeline management

Zero-ETL integrations seamlessly replicate transactional data from OLTP data sources to Amazon Redshift, general purpose S3 (with self-managed Iceberg), or S3 Tables. This approach eliminates the need to maintain complex ETL pipelines, reducing the number of moving parts in your data architecture and potential points of failure. Business users can analyze fresh operational data immediately rather than working with stale data from the last ETL run. See Aurora zero-ETL integrations for a list of OLTP data sources that can be replicated to an existing Amazon Redshift warehouse. See Zero-ETL integrations for information about other supported data sources that can be replicated to an existing Amazon Redshift warehouse, general purpose S3 with self-managed Iceberg, and S3 Tables.
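The following sketch shows what the multi-warehouse pattern looks like from a consumer warehouse, using the Redshift Data API. The workgroup, mounted database, schema, and table names are hypothetical, and the exact name of the auto-mounted external database depends on how your federated catalog and producer namespace are named, so treat the identifiers as placeholders rather than a fixed convention.

```python
import boto3

# Hedged sketch: a consumer Redshift Serverless workgroup queries a producer
# warehouse's data through the auto-mounted federated catalog, with no manually
# created data share. All names are illustrative.
redshift_data = boto3.client("redshift-data", region_name="us-east-1")

result = redshift_data.execute_statement(
    WorkgroupName="consumer-analytics-wg",
    Database="dev",
    Sql=(
        "SELECT order_date, COUNT(*) AS orders "
        'FROM "sales_rms_catalog@producer_ns"."public"."orders" '  # mounted external database (illustrative)
        "GROUP BY order_date ORDER BY order_date DESC LIMIT 7;"
    ),
)
print(result["Id"])
```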
Conclusion

A lakehouse architecture isn't about choosing between a data lake and a data warehouse. Instead, it's an approach to interoperability where both frameworks coexist and serve different purposes within a unified data architecture. By understanding fundamental storage patterns, implementing effective catalog strategies, and using native storage capabilities, you can build scalable, high-performance data architectures that support both your current analytics needs and future innovation. For more information, see The lakehouse architecture of Amazon SageMaker.

About the authors

Lakshmi Nair is a Senior Analytics Specialist Solutions Architect at AWS. She specializes in designing advanced analytics systems across industries. She focuses on crafting cloud-based data platforms, enabling real-time streaming, big data processing, and robust data governance.

Saman Irfan is a Senior Specialist Solutions Architect at Amazon Web Services, based in Berlin, Germany. Saman is passionate about helping organizations modernize their data architectures and unlock the full potential of their data to drive innovation and business transformation. Outside of work, she enjoys spending time with her family, watching TV series, and staying updated with the latest advancements in technology. | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/category/application-integration/amazon-managed-workflows-for-apache-airflow-amazon-mwaa/ | Amazon Managed Workflows for Apache Airflow (Amazon MWAA) | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Category: Amazon Managed Workflows for Apache Airflow (Amazon MWAA) AWS analytics at re:Invent 2025: Unifying Data, AI, and governance at scale by Larry Weber on 07 JAN 2026 in Amazon EMR , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon OpenSearch Service , Amazon Redshift , Amazon SageMaker Data & AI Governance , Amazon SageMaker Unified Studio , Analytics , AWS Glue , AWS Lake Formation , AWS re:Invent , Intermediate (200) Permalink Comments Share re:Invent 2025 showcased the bold Amazon Web Services (AWS) vision for the future of analytics, one where data warehouses, data lakes, and AI development converge into a seamless, open, intelligent platform, with Apache Iceberg compatibility at its core. Across over 18 major announcements spanning three weeks, AWS demonstrated how organizations can break down data silos, […] Building scalable AWS Lake Formation governed data lakes with dbt and Amazon Managed Workflows for Apache Airflow by Abhilasha Agarwal and Muralidhar Reddy on 06 JAN 2026 in Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , AWS Lake Formation , Expert (400) , Technical How-to Permalink Comments Share Organizations often struggle with building scalable and maintainable data lakes—especially when handling complex data transformations, enforcing data quality, and monitoring compliance with established governance. Traditional approaches typically involve custom scripts and disparate tools, which can increase operational overhead and complicate access control. A scalable, integrated approach is needed to simplify these processes, improve data reliability, […] Introducing Amazon MWAA Serverless by John Jackson on 17 NOV 2025 in Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Announcements , Intermediate (200) , Serverless , Technical How-to Permalink Comments Share Today, AWS announced Amazon Managed Workflows for Apache Airflow (MWAA) Serverless. This is a new deployment option for MWAA that eliminates the operational overhead of managing Apache Airflow environments while optimizing costs through serverless scaling. In this post, we demonstrate how to use MWAA Serverless to build and deploy scalable workflow automation solutions. Best practices for migrating from Apache Airflow 2.x to Apache Airflow 3.x on Amazon MWAA by Anurag Srivastava , Ankit Sahu , Kamen Sharlandjiev , Mike Ellis , Jeetendra Vaidya , and Venu Thangalapally on 07 OCT 2025 in Advanced (300) , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Best Practices Permalink Comments Share Apache Airflow 3.x on Amazon MWAA introduces architectural improvements such as API-based task execution that provides enhanced security and isolation. This migration presents an opportunity to embrace next-generation workflow orchestration capabilities while providing business continuity. This post provides best practices and a streamlined approach to successfully navigate this critical migration, providing minimal disruption to your mission-critical data pipelines while maximizing the enhanced capabilities of Airflow 3. 
Introducing Apache Airflow 3 on Amazon MWAA: New features and capabilities by Anurag Srivastava , Ankit Sahu , Kamen Sharlandjiev , Sriharsh Adari , Mohammad Sabeel , and Satya Chikkala on 01 OCT 2025 in Advanced (300) , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Announcements Permalink Comments Share AWS announced the general availability of Apache Airflow 3 on Amazon Managed Workflows for Apache Airflow (Amazon MWAA). This release transforms how organizations use Apache Airflow to orchestrate data pipelines and business processes in the cloud, bringing enhanced security, improved performance, and modern workflow orchestration capabilities to Amazon MWAA customers. This post explores the features of Airflow 3 on Amazon MWAA and outlines enhancements that improve your workflow orchestration capabilities Use Apache Airflow workflows to orchestrate data processing on Amazon SageMaker Unified Studio by Vinod Jayendra , Kamen Sharlandjiev , Sean Bjurstrom , and Suba Palanisamy on 22 SEP 2025 in Advanced (300) , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon SageMaker Unified Studio , Application Integration , Technical How-to Permalink Comments Share Orchestrating machine learning pipelines is complex, especially when data processing, training, and deployment span multiple services and tools. In this post, we walk through a hands-on, end-to-end example of developing, testing, and running a machine learning (ML) pipeline using workflow capabilities in Amazon SageMaker, accessed through the Amazon SageMaker Unified Studio experience. These workflows are powered by Amazon Managed Workflows for Apache Airflow. Build data pipelines with dbt in Amazon Redshift using Amazon MWAA and Cosmos by Cindy Li , Akhil B , Harshana Nanayakkara , and Joao Palma on 13 AUG 2025 in Advanced (300) , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon Redshift , Technical How-to Permalink Comments Share In this post, we explore a streamlined, configuration-driven approach to orchestrate dbt Core jobs using Amazon Managed Workflows for Apache Airflow (Amazon MWAA) and Cosmos, an open source package. These jobs run transformations on Amazon Redshift. With this setup, teams can collaborate effectively while maintaining data quality, operational efficiency, and observability. Best practices for upgrading Amazon MWAA V1.x to V2.x by Anurag Srivastava , Chandan Rupakheti , Sriharsh Adari , and Venu Thangalapally on 02 JUN 2025 in Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Best Practices Permalink Comments Share In this post, we explore best practices for upgrading your Amazon MWAA environment and provide a step-by-step guide to seamlessly transition to the latest version. How LaunchDarkly migrated to Amazon MWAA to achieve efficiency and scale by Asena Uyar, Dean Verhey and Daniel Lopes on 16 MAY 2025 in Amazon EC2 Container Service , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Customer Solutions , Intermediate (200) Permalink Comments Share In this post, we explore how LaunchDarkly scaled the internal analytics platform up to 14,000 tasks per day, with minimal increase in costs, after migrating from another vendor-managed Apache Airflow solution to AWS, using Amazon Managed Workflows for Apache Airflow (Amazon MWAA) and Amazon Elastic Container Service (Amazon ECS). 
Build end-to-end Apache Spark pipelines with Amazon MWAA, Batch Processing Gateway, and Amazon EMR on EKS clusters by Avinash Desireddy and Suvojit Dasgupta on 01 MAY 2025 in Amazon EMR on EKS , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , AWS Big Data , Intermediate (200) , Open Source Permalink Comments Share This post shows how to enhance the multi-cluster solution by integrating Amazon Managed Workflows for Apache Airflow (Amazon MWAA) with BPG. By using Amazon MWAA, we add job scheduling and orchestration capabilities, enabling you to build a comprehensive end-to-end Spark-based data processing pipeline. | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/navigating-architectural-choices-for-a-lakehouse-using-amazon-sagemaker/ | Navigating architectural choices for a lakehouse using Amazon SageMaker | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Navigating architectural choices for a lakehouse using Amazon SageMaker by Lakshmi Nair and Saman Irfan on 12 JAN 2026 in Amazon SageMaker Data & AI Governance , Amazon SageMaker Lakehouse , Amazon SageMaker Unified Studio , Analytics Permalink Comments Share Organizations today are using data more than ever to drive decision-making and innovation. Because they work with petabytes of information, they have traditionally gravitated towards two distinct paradigms—data lakes and data warehouses. While each paradigm excels at specific use cases, they often create unintended barriers between the data assets. Data lakes are often built on object storage such as Amazon Simple Storage Service (Amazon S3) , which provide flexibility by supporting diverse data formats and schema-on-read capabilities. This enables multi-engine access where various processing frameworks (such as Apache Spark , Trino , and Presto ) can query the same data. On the other hand, data warehouses (such as Amazon Redshift ) excel in areas such as ACID (atomicity, consistency, isolation and durability) compliance, performance optimization, and straightforward deployment, making them suitable for structured and complex queries. As data volumes grow and analytics needs become more complex, organizations seek to bridge these silos and use the strengths of both paradigms. This is where the concept of lakehouse architecture is applied, offering a unified approach to data management and analytics. Over time, several distinct lakehouse approaches have emerged. In this post, we show you how to evaluate and choose the right lakehouse pattern for your needs. The data lake centric lakehouse approach begins with the scalability, cost-effectiveness, and flexibility of a traditional data lake built on object storage. The goal is to add a layer of transactional capabilities and data management traditionally found in databases, primarily through open table formats (such as Apache Hudi , Delta Lake , or Apache Iceberg ). While open table formats have made significant strides by introducing ACID guarantees for single-table operations in data lakes, implementing multi-table transactions with complex referential integrity constraints and joins remains challenging. The fundamental nature of querying petabytes of files on object storage, often through distributed query engines, can result in slow interactive queries at high concurrency when compared to a highly optimized, indexed, and materialized data warehouse. Open table formats introduce compaction and indexing, but the full suite of intelligent storage optimizations found in highly mature, proprietary data warehouses is still evolving in data lake-centric architecture. The data warehouse centric lakehouse approach offers robust analytical capabilities but has significant interoperability challenges. 
Though data warehouses provide JAVA Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers for external access, the underlying data remains in proprietary formats, making it difficult for external tools or services to directly access it without complex extract, transform, and load (ETL) or API layers. This can lead to data duplication and latency. A data warehouse architecture might support reading open table formats, but its ability to write to them or participate in their transactional layers can be limited. This restricts true interoperability and can create shadow data silos. On AWS, you can build a modern, open lakehouse architecture to achieve unified access to both data warehouses and data lakes. By using this approach, you can build sophisticated analytics, machine learning (ML), and generative AI applications while maintaining a single source of truth for their data. You don’t have to choose between a data lake or data warehouse. You can use existing investments and preserve the strengths of both paradigms while eliminating their respective weaknesses. The lakehouse architecture on AWS embraces open table formats such as Apache Hudi, Delta Lake, and Apache Iceberg. You can accelerate your lakehouse journey with the next generation of Amazon SageMaker , which delivers an integrated experience for analytics and AI with unified access to data. SageMaker is built on an open lakehouse architecture that is fully compatible with Apache Iceberg. By extending support for Apache Iceberg REST APIs, SageMaker significantly adds interoperability and accessibility across various Apache Iceberg-compatible query engines and tools. At the core of this architecture is a metadata management layer built on AWS Glue Data Catalog and AWS Lake Formation , which provide unified governance and centralized access control. Foundations of the Amazon SageMaker lakehouse architecture The lakehouse architecture of Amazon SageMaker has four main components that work together to create a unified data platform. Flexible storage to adapt to the workload patterns and requirements Technical catalog that serves as a single source of truth for all metadata Integrated permission management with fine-grained access control across all data assets Open access framework built on Apache Iceberg REST APIs for universal compatibility Catalogs and permissions When building an open lakehouse, the catalog—your central repository of metadata—is a critical component for data discovery and governance. There are two types of catalogs in the lakehouse architecture of Amazon SageMaker: managed catalogs and federated catalogs. Managed catalog refers to when the metadata is managed by the lakehouse, and the data is stored in a general purpose S3 bucket. Federated catalog refers to mounting or connecting to external or existing data sources so you can query data from data sources such as Amazon Redshift , Snowflake, and Amazon DynamoDB without explicitly moving the data. For more information, see Data connections in the lakehouse architecture of Amazon SageMaker . You can use an AWS Glue crawler to automatically discover and register this metadata in Data Catalog . Data Catalog stores the schema and table metadata of your data assets, effectively turning files into logical tables. After your data is cataloged, the next challenge is controlling who can access it. While you could use complex S3 bucket policies for every folder, this approach is difficult to manage and scale. 
Lake Formation provides a centralized database-style permissions model on the Data Catalog, giving you the flexibility to grant or revoke fine-grained access at row, column, and cell levels for individual users or roles. Open access with Apache Iceberg REST APIs The lakehouse architecture described in the preceding section and shown in the following figure also uses the AWS Glue Iceberg REST catalog through the service endpoint, which provides OSS compatibility, enabling increased interoperability for managing Iceberg table metadata across Spark and other open source analytics engines. You can choose the appropriate API based on table format and use case requirements. In this post, we explore various lakehouse architecture patterns, focusing on how to optimally use data lake and data warehouse to create robust, scalable, and performance-driven data solutions. Bringing data into your lakehouse on AWS When building a lakehouse architecture, you can choose from three distinct patterns to access and integrate your data, each offering unique advantages for different use cases. Traditional ETL is the classic method of extracting data, transforming it and loading it into your lakehouse. When to use it: You need complex transformations and require highly curated and optimized data sets for downstream applications for better performance You need to perform historical data migrations You need data quality enforcement and standardization at scale You need highly governed curated data in a lakehouse Zero-ETL is a modern architectural pattern where data automatically and continuously replicates from a source system to lakehouse with minimal or no manual intervention or custom code. Behind the scenes, the pattern uses change data capture (CDC) to automatically stream all new inserts, updates, and deletes from the source to the target. This architectural pattern is effective when the source system maintains a high degree of data cleanliness and structure, minimizing the need for heavy pre-load transformations, or when data refinement and aggregation can occur at the target end within lakehouse. Zero-ETL replicates data with minimal delay, and the transformation logic is performed on the target end closer to where the insights are generated by shifting it to a more efficient, post-load phase. When to use it: You need to reduce operational complexity and gain flexible control over data replication for both near real-time and batch use cases. You need limited customization. While zero-ETL implies minimal work, some light transformations might still be required on the replicated data. You need to minimize the need for specialized ETL expertise. You need to maintain data freshness without processing delays and reduce risk of data inconsistencies. Zero-ETL facilitates faster time-to-insight. Data federation (no-movement approach) is a method that enables querying and combining data from multiple disparate sources without physically moving or copying it into a single centralized location. This query-in-place approach allows the query engine to connect directly to the external source systems, delegate and execute queries, and combine results on the fly for presentation to the user. The effectiveness of this architecture pattern depends on three key factors: network latency between systems, source system performance capabilities, and the query engine’s ability to push down predicates to optimize query execution. 
This no-movement approach can significantly reduce data duplication and storage costs while providing real-time access to source data. When to use it: You need to query the source system directly to use operational analytics. You don’t want to duplicate data to save on storage space and associated costs within your Lakehouse. You’re willing to trade some query performance and governance for immediate data availability and one-time analysis of live data. You don’t need to frequently query the data. Understanding the storage layer of your lakehouse on AWS Now that you’ve seen different ways to get data into a lakehouse, the next question is where to store the data. As shown in the following figure, you can architect a modern open lakehouse on AWS by storing the data in a data lake (Amazon S3 or Amazon S3 Tables ) or data warehouse ( Redshift Managed Storage ), so you can optimize for both flexibility and performance based on your specific workload requirements. A modern lakehouse isn’t a single storage technology but a strategic combination of them. The decision of where and how to store your data impacts everything from the speed of your dashboards to the efficiency of your ML models. You must consider not only the initial cost of storage but also the long-term costs of data retrieval, the latency required by your users, and the governance necessary to maintain a single source of truth. In this section, we delve into architectural patterns for the data lake and the data warehouse and provide a clear framework for when to use each storage pattern. While they have historically been seen as competing architectures, the modern and open lakehouse approach uses both to create a single, powerful data platform. General purpose S3 A general purpose S3 bucket in Amazon S3 is the standard, foundational bucket type used for storing objects. It provides flexibility so that you can store your data in its native format without a rigid upfront schema. Because of the ability of an S3 bucket to decouple storage from compute, you can store the data in a highly scalable location, while a variety of query engines can access and process it independently. This means that you can choose the right tool for the job without having to move or duplicate the data. You can store petabytes of data without ever having to provision or manage storage capacity, and its tiered storage classes provide significant cost savings by automatically moving less-frequently accessed data to more affordable storage. The existing Data Catalog functions as a managed catalog. It’s identified by the AWS account number, which means there is no migration needed for existing Data Catalogs; they’re already available in the lakehouse and become the default catalog for the new data, as shown in the following figure. A foundational data lake on general purpose S3 is highly efficient for append-only workloads. However, its file-based nature lacks the transactional guarantees of a traditional database. This is where you can use the support of open-source transactional table formats such as Apache Hudi, Delta Lake, and Apache Iceberg. With these table formats, you can implement multi-version concurrency control, allowing multiple readers and writers to operate simultaneously without conflicts. They provide snapshot isolation, so that readers see consistent views of data even during write operations. A typical medallion architecture pattern with Apache Iceberg is depicted in the following figure. 
When building a lakehouse on AWS with Apache Iceberg, customers can choose between two primary approaches for storing their data on Amazon S3: General purpose S3 buckets with self-managed Iceberg or using the fully managed S3 Tables. Each path has distinct advantages, and the right choice depends on your specific needs for control, performance, and operational overhead. General purpose S3 with Self-managed Iceberg Using general purpose S3 buckets with self-managed Iceberg is a traditional approach where you store both data and Iceberg metadata files in standard S3 buckets. With this option, you maintain full control but are responsible for managing the complete Iceberg table lifecycle, including essential maintenance tasks such as compaction and garbage collection. When to use it: Maximum control: This approach provides complete control over the entire data life cycle. You can fine-tune every aspect of table maintenance, such as defining your own compaction schedules and strategies, which can be crucial for specific high-performance workloads or to optimize costs. Flexibility and customization: It is ideal for organizations with strong in-house data engineering expertise that need to integrate with a wider range of open-source tools and custom scripts. You can use Amazon EMR or Apache Spark to manage the table operations. Lower upfront costs: You pay only for Amazon S3 storage, API requests, and the compute resources you use for maintenance. This can be more cost-effective for smaller or less-frequent workloads where continuous, automated optimization isn’t necessary. Note: The query performance depends entirely on your optimization strategy. Without continuous, scheduled jobs for compaction, performance can degrade over time as data gets fragmented. You must monitor these jobs to ensure efficient querying. S3 Tables S3 Tables provides S3 storage that’s optimized for analytic workloads and provides Apache Iceberg compatibility to store tabular data at scale. You can integrate S3 table buckets and tables with Data Catalog and register the catalog as a Lake Formation data location from the Lake Formation console or using service APIs, as shown in the following figure. This catalog will be registered and mounted as a federated lakehouse catalog. When to use it: Simplified operations: S3 Tables automatically handles table maintenance tasks such as compaction, snapshot management and orphan file cleanup in the background. This automation eliminates the need to build and manage custom maintenance jobs, significantly reducing your operational overhead. Automated optimization: S3 Tables provides built-in automatic optimizations that improve query performance. These optimizations include background processes such as file compaction to address the small files problem and data layout optimizations specific to tabular data. However, this automation trades flexibility for convenience. Because you can’t control the timing or method of compaction operations, workloads with specific performance requirements might experience varying query performance. Focus on data usage: S3 Tables reduces the engineering overhead and shifts the focus to data consumption, data governance and value creation. Simplified entry to open table formats: It’s suitable for teams who are new to the concept of Apache Iceberg but want to use transactional capabilities on data lake. No external catalog: Suitable for smaller teams who don’t want to manage an external catalog. 
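If you take the self-managed Iceberg path described above, the maintenance you own looks roughly like the following Spark SQL sketch. The procedure names are standard Iceberg Spark procedures, but the catalog and table names and the retention window are illustrative assumptions, and you would typically run these on a schedule with Amazon EMR or AWS Glue.

```sql
-- Compact small data files to counter the "small files problem"
CALL glue_catalog.system.rewrite_data_files(table => 'sales_db.orders');

-- Expire old snapshots to bound metadata growth and reclaim storage
CALL glue_catalog.system.expire_snapshots(
    table => 'sales_db.orders',
    older_than => TIMESTAMP '2025-12-01 00:00:00');

-- Delete files no longer referenced by any snapshot (garbage collection)
CALL glue_catalog.system.remove_orphan_files(table => 'sales_db.orders');
```

With S3 Tables, this housekeeping runs for you in the background, which is the operational trade-off discussed above.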
Redshift managed storage While the data lake serves as the central source of truth for all your data, it’s not the most suitable data store for every job. For the most demanding business intelligence and reporting workloads, the data lake’s open and flexible nature can introduce performance unpredictability. To help ensure the desired performance, consider transitioning a curated subset of your data from the data lake to a data warehouse for the following reasons: High concurrency BI and reporting: When hundreds of business users are concurrently running complex queries on live dashboards, a data warehouse is specifically optimized to handle these workloads with predictable, sub-second query latency. Predictable performance SLAs: – For critical business processes that require data to be delivered at a guaranteed speed, such as financial reporting or end-of-day sales analysis, a data warehouse provides consistent performance. Complex SQL workloads: While data lakes are powerful, they can struggle with highly complex queries involving numerous joins and massive aggregations. A data warehouse is purpose-built to run these relational workloads efficiently. The lakehouse architecture on AWS supports Redshift Managed Storage (RMS), a storage option provided by Amazon Redshift, a fully managed, petabyte-scale data warehouse service in the cloud. RMS storage supports the automatic table optimization offered in Amazon Redshift such as built-in query optimizations for data warehousing workloads, automated materialized views , and AI-driven optimizations and scaling for frequently running workloads. Federated RMS catalog: Onboard existing Amazon Redshift data warehouses to lakehouse Implementing a federated catalog with existing Amazon Redshift data warehouses creates a metadata-only integration that requires no data movement. This approach lets you extend your established Amazon Redshift investments into a modern open lakehouse framework while maintaining compatibility with existing workflows. Amazon Redshift uses a hierarchical data organization structure: Cluster level : Starts with a namespace Database level : Contains multiple databases Schema level : Organizes tables within databases When you register your existing Amazon Redshift provisioned or serverless namespaces as a federated catalog in Data Catalog, this hierarchy maps directly into the lakehouse metadata layer. The lakehouse implementation on AWS supports multiple catalogs using a dynamic hierarchy to organize and map the underlying storage metadata. After you register a namespace, the federated catalog automatically mounts across all Amazon Redshift data warehouses in your AWS Region and account. During this process, Amazon Redshift internally creates external databases that correspond to data shares. This mechanism remains completely abstracted from end users. By using federated catalogs, you can create and use immediate visibility and accessibility across your data ecosystem. Permissions on the federated catalogs can be managed by Lake Formation for both same account and cross account access. The real capability of federated catalogs emerges when accessing Amazon Redshift-managed storage from external AWS engines such as Amazon Athena , Amazon EMR , or open source Spark. Because Amazon Redshift uses proprietary block-based storage that only Amazon Redshift engines can read natively, AWS automatically provisions a service-managed Amazon Redshift Serverless instance in the background. 
This service-managed instance acts as a translation layer between external engines and Amazon Redshift managed storage. AWS establishes automatic data shares between your registered federated catalog and the service-managed Amazon Redshift Serverless instance to enable secure, efficient data access. AWS also creates a service-managed Amazon S3 bucket in the background for data transfer. When an external engine such as Athena submits queries against an Amazon Redshift federated catalog, Lake Formation handles credential vending by providing temporary credentials to the requesting service. The query executes through the service-managed Amazon Redshift Serverless instance, which accesses data through the automatically established data shares, processes results, offloads them to a service-managed Amazon S3 staging area, and then returns results to the original requesting engine. To track the compute cost of the federated catalog for an existing Amazon Redshift warehouse, use the tag aws:redshift-serverless:LakehouseManagedWorkgroup with the value "True". To activate the AWS-generated cost allocation tags for billing insight, follow the activation instructions. You can also view the computational cost of the resources in AWS Billing. When to use it: Existing Amazon Redshift investments: Federated catalogs are designed for organizations with existing Amazon Redshift deployments that want to use their data across multiple services without migration. Cross-service data sharing: Teams can share existing data in an Amazon Redshift data warehouse across different warehouses and centralize their permissions. Enterprise integration requirements: This approach is suitable for organizations that need to integrate with established data governance. It also maintains compatibility with current workflows while adding lakehouse capabilities. Infrastructure control and pricing: You retain full control over compute capacity for your existing warehouses for predictable workloads. You can optimize compute capacity, choose between on-demand and reserved capacity pricing, and fine-tune performance parameters. This provides cost predictability and performance control for consistent workloads. When implementing a lakehouse architecture with multiple catalog types, selecting the appropriate query engine is crucial for both performance and cost optimization. This post focuses on the storage foundation of the lakehouse; however, for critical workloads involving extensive Amazon Redshift data operations, consider executing queries within Amazon Redshift or using Spark when possible. Complex joins spanning multiple Amazon Redshift tables through external engines might result in higher compute costs if the engines don't support full predicate push-down. Other use cases Build a multi-warehouse architecture Amazon Redshift supports data sharing, which you can use to share live data between source and target Amazon Redshift clusters. With data sharing, you can share live data without creating copies or moving data, enabling use cases such as workload isolation (hub-and-spoke architecture) and cross-group collaboration (data mesh architecture). Without a lakehouse architecture, you must create an explicit data share between source and target Amazon Redshift clusters. While managing these data shares in small deployments is relatively straightforward, it becomes complex in data mesh architectures.
The lakehouse architecture addresses this challenge by letting customers publish their existing Amazon Redshift warehouses as federated catalogs. These federated catalogs are automatically mounted and made available as external databases in other consumer Amazon Redshift warehouses within the same account and Region. With this approach, you maintain a single copy of data and use multiple data warehouses to query it, eliminating the need to create and manage multiple data shares while still scaling through workload isolation. Permission management becomes centralized through Lake Formation, streamlining governance across the entire multi-warehouse environment. Near real-time analytics on petabytes of transactional data with no pipeline management: Zero-ETL integrations seamlessly replicate transactional data from OLTP data sources to Amazon Redshift, general purpose S3 (with self-managed Iceberg), or S3 Tables. This approach eliminates the need to maintain complex ETL pipelines, reducing the number of moving parts in your data architecture and the potential points of failure. Business users can analyze fresh operational data immediately rather than working with stale data from the last ETL run. See Aurora zero-ETL integrations for a list of OLTP data sources that can be replicated to an existing Amazon Redshift warehouse. See Zero-ETL integrations for information about other supported data sources that can be replicated to an existing Amazon Redshift warehouse, general purpose S3 with self-managed Iceberg, and S3 Tables. Conclusion A lakehouse architecture isn't about choosing between a data lake and a data warehouse. Instead, it's an approach to interoperability where both frameworks coexist and serve different purposes within a unified data architecture. By understanding fundamental storage patterns, implementing effective catalog strategies, and using native storage capabilities, you can build scalable, high-performance data architectures that support both your current analytics needs and future innovation. For more information, see The lakehouse architecture of Amazon SageMaker. About the authors Lakshmi Nair Lakshmi is a Senior Analytics Specialist Solutions Architect at AWS. She specializes in designing advanced analytics systems across industries. She focuses on crafting cloud-based data platforms, enabling real-time streaming, big data processing, and robust data governance. Saman Irfan Saman is a Senior Specialist Solutions Architect at Amazon Web Services, based in Berlin, Germany. Saman is passionate about helping organizations modernize their data architectures and unlock the full potential of their data to drive innovation and business transformation. Outside of work, she enjoys spending time with her family, watching TV series, and staying updated with the latest advancements in technology. | 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/access-databricks-unity-catalog-data-using-catalog-federation-in-the-aws-glue-data-catalog/ | Access Databricks Unity Catalog data using catalog federation in the AWS Glue Data Catalog | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Access Databricks Unity Catalog data using catalog federation in the AWS Glue Data Catalog by Srividya Parthasarathy and Venkat Viswanathan on 12 JAN 2026 in Advanced (300) , Amazon SageMaker , AWS Glue , AWS Lake Formation , Technical How-to Permalink Comments Share AWS has launched the catalog federation capability, enabling direct access to Apache Iceberg tables managed in Databricks Unity Catalog through the AWS Glue Data Catalog . With this integration, you can discover and query Unity Catalog data in Iceberg format using an Iceberg REST API endpoint, while maintaining granular access controls through AWS Lake Formation . This approach significantly reduces operational overhead for managing catalog synchronization and associated costs by alleviating the need to replicate or duplicate datasets between platforms. In this post, we demonstrate how to set up catalog federation between the Glue Data Catalog and Databricks Unity Catalog, enabling data querying using AWS analytics services. Use cases and key benefits This federation capability is particularly valuable if you run multiple data platforms, because you can maintain your existing Iceberg catalog investments while using AWS analytics services. Catalog federation supports read operations and provides the following benefits: Interoperability – You can enable interoperability across different data platforms and tools through Iceberg REST APIs while preserving the value of your established technology investments. Cross-platform analytics – You can connect AWS analytics tools ( Amazon Athena , Amazon Redshift , Apache Spark) to query Iceberg and UniForm tables stored in Databricks Unity Catalog. It supports Databricks on AWS integration with the AWS Glue Iceberg REST Catalog for metadata retrieval, while using Lake Formation for permission management. Metadata management – The solution avoids manual catalog synchronization by making Databricks Unity Catalog databases and tables discoverable within the Data Catalog. You can implement unified governance through Lake Formation for fine-grained access control across federated catalog resources. Solution overview The solution uses catalog federation in the Data Catalog to integrate with Databricks Unity Catalog. The federated catalog created in AWS Glue mirrors the catalog objects in Databricks Unity Catalog and supports OAuth-based authentication. The solution is represented in the following diagram. The integration involves three high-level steps: Set up an integration principal in Databricks Unity Catalog and provide required read access on catalog resources to this principal. Enable OAuth-based authentication for the integration principal. Set up catalog federation to Databricks Unity Catalog in the Glue Data Catalog: Create a federated catalog in the Data Catalog using an AWS Glue connection. Create an AWS Glue connection that uses the credentials of the integration principal (in Step 1) to connect to Databricks Unity Catalog. 
Configure an AWS Identity and Access Management (IAM) role with permission to Amazon Simple Storage Service (Amazon S3) locations where the Iceberg table data resides. In a cross-account scenario, make sure the bucket policy grants required access to this IAM role. Discover Iceberg tables in federated catalogs using Lake Formation or AWS Glue APIs. During query operations, Lake Formation manages fine-grained permissions on federated resources and credential vending for access to the underlying data. In the following sections, we walk through the steps to integrate the Glue Data Catalog with Databricks Unity Catalog on AWS. Prerequisites To follow along with the solution presented in this post, you must have the following prerequisites: Databricks Workspace (on AWS) with Databricks Unity Catalog configured. An IAM role that is a Lake Formation data lake administrator in your AWS account. A data lake administrator is an IAM principal that can register S3 locations, access the Data Catalog, grant Lake Formation permissions to other users, and view AWS CloudTrail logs. See Create a data lake administrator for more information. Configure Databricks Unity Catalog for external access Catalog federation to a Databricks Unity Catalog uses the OAuth2 credentials of a Databricks service principal configured in the workspace admin settings. This authentication mechanism allows the Data Catalog to access the metadata of various objects (such as catalogs, databases, and tables) within Databricks Unity Catalog, based on the privileges associated with the service principal. For proper functionality, grant the service principal with the necessary permissions (read permission on catalog, schema, and tables) to read the metadata of these objects and allow access from external engines. Next, catalog federation enables discovery and query of Iceberg tables in your Databricks Unity Catalog. For reading delta tables, enable UniForm on a Delta Lake table in Databricks to generate Iceberg metadata. For more information, refer to Read Delta tables with Iceberg clients . Follow the Databricks tutorial and documentation to create the service principal and associated privileges in your Databricks workspace. For this post, we use a service principal named integrationprincipal that is configured with required permissions (SELECT, USE CATALOG, USE SCHEMA) on Databricks Unity Catalog objects and will be used for authentication to catalog instance. Catalog federation supports OAuth2 authentication, so enable OAuth for the service principal and note down the client_id and client_secret for later use. Set up Data Catalog federation with Databricks Unity Catalog Now that you have service principal access for Databricks Unity Catalog, you can set up catalog federation in the Data Catalog. To do so, you create an AWS Secrets Manager secret and create an IAM role for catalog federation. Create secret Complete the following steps to create a secret: Sign in to the AWS Management Console using an IAM role with access to Secrets Manager. On the Secrets Manager console, choose Store a new secret and Other type of secret . Set the key-value pair: Key: USER_MANAGED_CLIENT_APPLICATION_CLIENT_SECRET Value: The client secret noted earlier Choose Next . Enter a name for your secret (for this post, we use dbx ). Choose Store . 
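If you prefer the AWS CLI over the console for this step, a minimal equivalent sketch follows. The secret name dbx matches the walkthrough; the client secret placeholder must be replaced with the value noted earlier.

```bash
aws secretsmanager create-secret \
  --name dbx \
  --secret-string '{"USER_MANAGED_CLIENT_APPLICATION_CLIENT_SECRET":"<client_secret>"}'
```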
Create IAM role for catalog federation As the catalog owner of a federated catalog in the Data Catalog, you can use Lake Formation to implement comprehensive access controls, including table filters, column filters, and row filters, as well as tag-based access for your data teams. Lake Formation requires an IAM role with permissions to access the underlying S3 locations of your external catalog. In this step, you create an IAM role that enables the AWS Glue connection to access Secrets Manager, optional virtual private cloud (VPC) configurations, and Lake Formation to manage credential vending for the S3 bucket and prefix: Secrets Manager access – The AWS Glue connection requires permissions to retrieve secret values from Secrets Manager for OAuth tokens stored for your Databricks Unity service connection. VPC access (optional) – When using VPC endpoints to restrict connectivity to your Databricks Unity account, the AWS Glue connection needs permissions to describe and utilize VPC network interfaces. This configuration provides secure, controlled access to both your stored credentials and network resources while maintaining proper isolation through VPC endpoints. S3 bucket and AWS KMS key permission – The AWS Glue connection requires Amazon S3 permissions to read certificates if used in the connection setup. Additionally, Lake Formation requires read permissions on the bucket and prefix where the remote catalog table data resides. If the data is encrypted using an AWS Key Management Service (AWS KMS) key, additional AWS KMS permissions are required. Complete the following steps: Create an IAM role called LFDataAccessRole with the following policies: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": [ "<secrets manager ARN>" ] }, { "Effect": "Allow", "Action": [ "ec2:CreateNetworkInterface", "ec2:DeleteNetworkInterface", "ec2:DescribeNetworkInterfaces" ], "Resource": "*", "Condition": { "ArnEquals": { "ec2:Vpc": "arn:aws:ec2:region:account-id:vpc/<vpc-id>", "ec2:Subnet": [ "arn:aws:ec2:region:account-id:subnet/<subnet-id>" ] } } }, { # Required when using custom cert to sign requests. "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3 :::<bucketname>/<certpath>" ] }, { # Required when using customer managed encryption key for s3 "Effect": "Allow", "Action": [ "kms:decrypt", "kms:encrypt" ], "Resource": [ "<kmsKey>" ] } ] } Configure the role with the following trust policy: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": ["glue.amazonaws.com","lakeformation.amazonaws.com"] }, "Action": "sts:AssumeRole" } ] } Create federated catalog in Data Catalog AWS Glue supports the DATABRICKSICEBERGRESTCATALOG connection type for connecting the Data Catalog with managed Databricks Unity Catalog. This AWS Glue connector supports OAuth2 authentication for discovering metadata in Databricks Unity Catalog. Complete the following steps to create the federated catalog: Sign in to the console as a data lake admin. On the Lake Formation console, choose Catalogs in the navigation pane. Choose Create catalog . For Name , enter a name for your catalog. For Catalog name in Databricks , enter the name of a catalog existing in Databricks Unity Catalog. For Connection name , enter a name for the AWS Glue connection. For Workspace URL , enter the Unity Iceberg REST API URL (in format https:// <workspace-url> /cloud.databricks.com ). 
For Authentication , provide the following information: For Authentication type , choose OAuth2 . Alternatively, you can choose Custom authentication . For Custom authentication , an access token is created, refreshed, and managed by the customer’s application or system and stored using Secrets Manager. For Token URL , enter the token authentication server URL. For OAuth Client ID , enter the client_id for integrationprincipal . For OAuth Secret , enter the secret ARN that you created in the previous step. Alternatively, you can provide the client_secret directly. For Token URL parameter map scope , provide the API scope supported. If you have AWS PrivateLink set up or a proxy set up, you can provide network details under Settings for network configurations . For Register Glue connection with Lake Formation , choose the IAM role ( LFDataAccessRole ) created earlier to manage data access using Lake Formation. When the setup is done using AWS Command Line Interface (AWS CLI) commands, you have options to create two separate IAM roles: IAM role with policies to access network and secrets, which AWS Glue assumes to manage authentication IAM role with access to the S3 bucket, which Lake Formation assumes to manage credential vending for data access On the console, this setup is simplified with a single role having combined policies. For more details, refer to Federate to Databricks Unity Catalog . To test the connection, choose Run test . You can proceed to create the catalog. After you create the catalog, you can see the databases and tables in Databricks Unity Catalog listed under the federated catalog. You can implement fine-grained access control on the tables by applying row and column filters using Lake Formation. The following video shows the catalog federation setup with Databricks Unity Catalog. Discover and query the data using Athena In this post, we show how to use the Athena query editor to discover and query the Databricks Unity Catalog tables. On the Athena console, run the following query to access the federated table: SELECT * FROM "customerschema"."person" limit 10; The following video demonstrates querying the federated table from Athena. If you use the Amazon Redshift query engine, you must create a resource link on the federated database and grant permission on the resource link to the user or role. This database resource link is automounted under awsdatacatalog based on the permission granted for the user or role and available for querying. For instructions, refer to Creating resource links. Clean up To clean up your resources, complete the following steps: Delete the catalog and namespace in Databricks Unity Catalog for this post. Drop the resources in the Data Catalog and Lake Formation created for this post. Delete the IAM roles and S3 buckets used for this post. Delete any VPC and KMS keys if used for this post. Conclusion In this post, we explored the key elements of catalog federation and its architectural design, illustrating the interaction between the AWS Glue Data Catalog and Databricks Unity Catalog through centralized authorization and credential distribution for protected data access. By removing the requirement for complicated synchronization workflows, catalog federation makes it possible to query Iceberg data on Amazon S3 directly at its source using AWS analytics services with data governance across multi-catalog platforms. Try out the solution for your own use case, and share your feedback and questions in the comments. 
About the Authors Srividya Parthasarathy Srividya is a Senior Big Data Architect on the AWS Lake Formation team. She works with the product team and customers to build robust features and solutions for their analytical data platform. She enjoys building data mesh solutions and sharing them with the community. Venkatavaradhan (Venkat) Viswanathan Venkat is a Global Partner Solutions Architect at Amazon Web Services. Venkat is a Technology Strategy Leader in Data, AI, ML, Generative AI, and Advanced Analytics. Venkat is a Global SME for Databricks and helps AWS customers design, build, secure, and optimize Databricks workloads on AWS. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/cargo/appendix/glossary.html#cargo | Appendix: Glossary - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book Glossary Artifact An artifact is the file or set of files created as a result of the compilation process. This includes linkable libraries, executable binaries, and generated documentation. Cargo Cargo is the Rust package manager , and the primary topic of this book. Cargo.lock See lock file . Cargo.toml See manifest . Crate A Rust crate is either a library or an executable program, referred to as either a library crate or a binary crate , respectively. Every target defined for a Cargo package is a crate . Loosely, the term crate may refer to either the source code of the target or to the compiled artifact that the target produces. It may also refer to a compressed package fetched from a registry . The source code for a given crate may be subdivided into modules . Edition A Rust edition is a developmental landmark of the Rust language. The edition of a package is specified in the Cargo.toml manifest , and individual targets can specify which edition they use. See the Edition Guide for more information. Feature The meaning of feature depends on the context: A feature is a named flag which allows for conditional compilation. A feature can refer to an optional dependency, or an arbitrary name defined in a Cargo.toml manifest that can be checked within source code. Cargo has unstable feature flags which can be used to enable experimental behavior of Cargo itself. The Rust compiler and Rustdoc have their own unstable feature flags (see The Unstable Book and The Rustdoc Book ). CPU targets have target features which specify capabilities of a CPU. Index The index is the searchable list of crates in a registry . Lock file The Cargo.lock lock file is a file that captures the exact version of every dependency used in a workspace or package . It is automatically generated by Cargo. See Cargo.toml vs Cargo.lock . Manifest A manifest is a description of a package or a workspace in a file named Cargo.toml . A virtual manifest is a Cargo.toml file that only describes a workspace, and does not include a package. Member A member is a package that belongs to a workspace . Module Rust’s module system is used to organize code into logical units called modules , which provide isolated namespaces within the code. The source code for a given crate may be subdivided into one or more separate modules. This is usually done to organize the code into areas of related functionality or to control the visible scope (public/private) of symbols within the source (structs, functions, and so on). A Cargo.toml file is primarily concerned with the package it defines, its crates, and the packages of the crates on which they depend. Nevertheless, you will see the term “module” often when working with Rust, so you should understand its relationship to a given crate. Package A package is a collection of source files and a Cargo.toml manifest file which describes the package. A package has a name and version which is used for specifying dependencies between packages. A package contains multiple targets , each of which is a crate . 
The Cargo.toml file describes the type of the crates (binary or library) within the package, along with some metadata about each one — how each is to be built, what their direct dependencies are, etc., as described throughout this book. The package root is the directory where the package’s Cargo.toml manifest is located. (Compare with workspace root .) The package ID specification , or SPEC , is a string used to uniquely reference a specific version of a package from a specific source. Small to medium sized Rust projects will only need a single package, though it is common for them to have multiple crates. Larger projects may involve multiple packages, in which case Cargo workspaces can be used to manage common dependencies and other related metadata between the packages. Package manager Broadly speaking, a package manager is a program (or collection of related programs) in a software ecosystem that automates the process of obtaining, installing, and upgrading artifacts. Within a programming language ecosystem, a package manager is a developer-focused tool whose primary functionality is to download library artifacts and their dependencies from some central repository; this capability is often combined with the ability to perform software builds (by invoking the language-specific compiler). Cargo is the package manager within the Rust ecosystem. Cargo downloads your Rust package ’s dependencies ( artifacts known as crates ), compiles your packages, makes distributable packages, and (optionally) uploads them to crates.io , the Rust community’s package registry . Package registry See registry . Project Another name for a package . Registry A registry is a service that contains a collection of downloadable crates that can be installed or used as dependencies for a package . The default registry in the Rust ecosystem is crates.io . The registry has an index which contains a list of all crates, and tells Cargo how to download the crates that are needed. Source A source is a provider that contains crates that may be included as dependencies for a package . There are several kinds of sources: Registry source — See registry . Local registry source — A set of crates stored as compressed files on the filesystem. See Local Registry Sources . Directory source — A set of crates stored as uncompressed files on the filesystem. See Directory Sources . Path source — An individual package located on the filesystem (such as a path dependency ) or a set of multiple packages (such as path overrides ). Git source — Packages located in a git repository (such as a git dependency or git source ). See Source Replacement for more information. Spec See package ID specification . Target The meaning of the term target depends on the context: Cargo Target — Cargo packages consist of targets which correspond to artifacts that will be produced. Packages can have library, binary, example, test, and benchmark targets. The list of targets are configured in the Cargo.toml manifest , often inferred automatically by the directory layout of the source files. Target Directory — Cargo places built artifacts in the target directory. By default this is a directory named target at the workspace root, or the package root if not using a workspace. The directory may be changed with the --target-dir command-line option, the CARGO_TARGET_DIR environment variable , or the build.target-dir config option . For more information see the build cache documentation. 
Target Architecture — The OS and machine architecture for the built artifacts are typically referred to as a target . Target Triple — A triple is a specific format for specifying a target architecture. Triples may be referred to as a target triple which is the architecture for the artifact produced, and the host triple which is the architecture that the compiler is running on. The target triple can be specified with the --target command-line option or the build.target config option . The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> where: arch = The base CPU architecture, for example x86_64 , i686 , arm , thumb , mips , etc. sub = The CPU sub-architecture, for example arm has v7 , v7s , v5te , etc. vendor = The vendor, for example unknown , apple , pc , nvidia , etc. sys = The system name, for example linux , windows , darwin , etc. none is typically used for bare-metal without an OS. abi = The ABI, for example gnu , android , eabi , etc. Some parameters may be omitted. Run rustc --print target-list for a list of supported targets. Test Targets Cargo test targets generate binaries which help verify proper operation and correctness of code. There are two types of test artifacts: Unit test — A unit test is an executable binary compiled directly from a library or a binary target. It contains the entire contents of the library or binary code, and runs #[test] annotated functions, intended to verify individual units of code. Integration test target — An integration test target is an executable binary compiled from a test target which is a distinct crate whose source is located in the tests directory or specified by the [[test]] table in the Cargo.toml manifest . It is intended to only test the public API of a library, or execute a binary to verify its operation. Workspace A workspace is a collection of one or more packages that share common dependency resolution (with a shared Cargo.lock lock file ), output directory, and various settings such as profiles. A virtual workspace is a workspace where the root Cargo.toml manifest does not define a package, and only lists the workspace members . The workspace root is the directory where the workspace’s Cargo.toml manifest is located. (Compare with package root .) | 2026-01-13T09:29:13 |
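To tie several of these glossary terms together, here is a small illustrative Cargo.toml for a workspace root; it is a virtual manifest because it defines no [package] of its own, and the member names are made up for the example.

```toml
# Workspace root Cargo.toml (a virtual manifest: no [package] section)
[workspace]
members = ["cli", "core"]   # member packages sharing one Cargo.lock and target directory
resolver = "2"

[workspace.dependencies]
serde = "1.0"               # a dependency version the member packages can inherit
```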
https://aws.amazon.com/blogs/big-data/use-amazon-sagemaker-custom-tags-for-project-resource-governance-and-cost-tracking/ | Use Amazon SageMaker custom tags for project resource governance and cost tracking | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Use Amazon SageMaker custom tags for project resource governance and cost tracking by David Victoria , Ahan Malli , and Rohit Srikanta on 08 JAN 2026 in Advanced (300) , Amazon SageMaker , Amazon SageMaker Unified Studio , Technical How-to Permalink Comments Share Amazon SageMaker announced a new feature that you can use to add custom tags to resources created through an Amazon SageMaker Unified Studio project. This helps you enforce tagging standards that conform to your organization’s service control policies (SCPs) and helps enable cost tracking reporting practices on resources created across the organization. As a SageMaker administrator, you can configure a project profile with tag configurations that will be pushed down to projects that currently use or will use that project profile. The project profile is set up to pass either required key and value tag pairings or pass the key of the tag with a default value that can be modified during project creation. All tags passed to the project will result in the resources created by that project being tagged. This provides you with a governance mechanism that enforces that project resources have the expected tags across all projects of the domain. The first release of custom tags for project resources is supported through an application programming interface (API), through Amazon DataZone SDKs. In this post, we look at use cases for custom tags and how to use the AWS Command Line Interface (AWS CLI) to add tags to project resources. What we hear from customers As customers continue to build and collaborate using AWS tools for model development, generative AI, data processing, and SQL analytics, they see the need to bring control and visibility into the resources being created. To support connectivity to these AWS tools from SageMaker Unified Studio projects, many different types of resources across AWS services need to be created. These resources are created through AWS CloudFormation stacks (through project environment deployment) by the Amazon SageMaker service. From customers we hear the following use cases: Customers need to enforce that tagging practices conform to company policies through the use of AWS controls, such as SCPs, for resource creation. These controls block the creation of resources unless specific tags are placed on the resource. Customers can also start with policies to enforce that the correct tags are placed when resources are created with the additional goal of standardizing on resource reporting. By placing identifiable information on resources when created, they enforce consistency and completeness when performing cost attribution reporting and observability. Customer Swiss Life uses SageMaker as a single solution for cataloging, discovery, sharing, and governance of their enterprise data across business domains. They require all resources have a set of mandatory tags for their finance group to bill organizations across their company for the AWS resources created. “The launch of project resource tags for Amazon SageMaker allows us to bring visibility to the costs incurred across our accounts. 
With this capability we are able to meet the resource tagging guidelines of our company and have confidence in attributing costs across our multi-account setup for the resources created by Amazon SageMaker projects.” – Tim Kopacz, Software Developer at Swiss Life Prerequisites To get started with custom tags, you must have the following resources: A SageMaker Unified Studio domain. An AWS Identity and Access Management (IAM) entity with privileges to make AWS CLI calls to the domain. An IAM entity authorized to make changes to the domain IAM provisioning role. If SageMaker created this for you, it will be called AmazonSageMakerProvisioning-<accountId> . The provisioning role provisions and manages resources defined in the selected blueprints in your account. How to set up project resource tags The following steps outline how you can configure custom tags for your SageMaker Unified Studio project resources: (Optional) Update the SageMaker provisioning role to permit specific tag keys. Create a new project profile with project resource tags configured. Create a new project with project resource tags. Update an existing project with project resource tags. Validate that the resources are tagged. (Optional) Update a SageMaker provisioning role to permit tag key values The AmazonSageMakerProvisioning-<accountId> role has an AWS managed policy with condition aws:TagKeys allowing tags to be created by this role only if the tag key begins with AmazonDataZone . For this example, we will change the tag key to begin with different strings. Skip to Create a new project profile with project resource tags configured if you don’t need tag keys to have a different structure (such as begins with, contains, and so on) Open the AWS Management Console and go to IAM . In the navigation pane, choose Roles . In the list, choose AmazonSageMakerProvisioning- <accountId> . Choose the Permissions tab. Choose Add permissions , and then choose Create inline policy . Under Policy editor , select JSON . Enter the following policy. Add the strings under the condition aws:TagKeys . In this example, tag keys beginning with ACME or tag keys with the exact match of CostCenter will be created by the role. 
{ "Version": "2012-10-17", "Statement": [ { "Sid": "CustomTagsUnTagPermissions", "Effect": "Allow", "Action": [ "codecommit:UntagResource", "iam:UntagRole", "logs:UntagResource", "athena:UntagResource", "redshift-serverless:UntagResource", "scheduler:UntagResource", "bedrock:UntagResource", "neptune-graph:UntagResource", "quicksight:UntagResource", "glue:UntagResource", "airflow:UntagResource", "secretsmanager:UntagResource", "lambda:UntagResource", "emr-serverless:UntagResource", "elasticmapreduce:RemoveTags", "sagemaker:DeleteTags", "ec2:DeleteTags" ], "Resource": "*", "Condition": { "StringEquals": { "aws:ResourceAccount": "${aws:PrincipalAccount}" }, "ForAllValues:StringLike": { "aws:TagKeys": [ "AmazonDataZone*", "ACME*", "CostCenter" ] }, "Null": { "aws:ResourceTag/AmazonDataZoneProject": "false" } } }, { "Sid": "CustomTagsTaggingPermissions", "Effect": "Allow", "Action": [ "cloudformation:TagResource", "codecommit:TagResource", "iam:TagRole", "glue:TagResource", "athena:TagResource", "lambda:TagResource", "redshift-serverless:TagResource", "logs:TagResource", "secretsmanager:TagResource", "sagemaker:AddTags", "emr-serverless:TagResource", "neptune-graph:TagResource", "bedrock:TagResource", "elasticmapreduce:AddTags", "airflow:TagResource", "scheduler:TagResource", "quicksight:TagResource", "emr-containers:TagResource", "logs:CreateLogGroup", "athena:CreateWorkGroup", "scheduler:CreateScheduleGroup", "cloudformation:CreateStack", "ec2:*" ], "Resource": "*", "Condition": { "ForAnyValue:StringLike": { "aws:TagKeys": [ "AmazonDataZone*", "ACME*", "CostCenter" ] }, "StringEquals": { "aws:ResourceAccount": "${aws:PrincipalAccount}" } } } ] } It’s possible to scope down the specific AWS service tag and un-tag permissions based on which blueprints or capabilities are being used. Create a new project profile with project resource tags configured Use the following steps to create a new SQL Analytics project profile with custom tags. The example uses AWS CLI commands. Open the AWS CloudShell console. Create a project profile using the following CLI command. The project-resource-tags parameter consists of key (tag key), value (tag value), and isValueEditable (boolean indicating if the tag value can be modified during project creation or update). The allow-custom-project-resource-tags parameter set to true permits the project creator to create additional key-value pairs. The key needs to conform to the inline policy of the AmazonSageMakerProvisioning-<accountId> role. The project-resource-tags-description parameter is a description field for project resource tags. The max character limit is 2,048. The description needs to be passed in every time create-project-profile or update-project-profile is called. 
aws datazone create-project-profile \ --name "SQL Analytics with Project Resource Tags" \ --description "Analyze your data in SageMaker Lakehouse using SQL" \ --domain-identifier "$DOMAIN_ID" \ --region "$REGION" \ --status ENABLED \ --project-resource-tags '[ { "key": "ACME-Application", "value": "SageMaker", "isValueEditable": false }, { "key": "CostCenter", "value": "123", "isValueEditable": true } ]' \ --allow-custom-project-resource-tags \ --environment-configurations '[ { "name": "Tooling", "description": "Configuration for the Tooling Environment", "environmentBlueprintId": "", "deploymentMode": "ON_CREATE", "deploymentOrder": 0, "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "enableSpaces", "value": "false", "isEditable": false }, { "name": "maxEbsVolumeSize", "isEditable": false }, { "name": "idleTimeoutInMinutes", "isEditable": false }, { "name": "lifecycleManagement", "isEditable": false }, { "name": "enableNetworkIsolation", "isEditable": false } ] } }, { "name": "Lakehouse Database", "description": "Creates databases in Amazon SageMaker Lakehouse for storing tables in S3 and Amazon Athena resources for your SQL workloads", "environmentBlueprintId": "", "deploymentMode": "ON_CREATE", "deploymentOrder": 1, "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "glueDbName", "value": "glue_db", "isEditable": true } ] } }, { "name": "OnDemand RedshiftServerless", "description": "Enables you to create an additional Amazon Redshift Serverless workgroup for your SQL workloads", "environmentBlueprintId": "", "deploymentMode": "ON_DEMAND", "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "redshiftDbName", "value": "dev", "isEditable": true }, { "name": "redshiftMaxCapacity", "value": "512", "isEditable": true }, { "name": "redshiftWorkgroupName", "value": "redshift-serverless-workgroup", "isEditable": true }, { "name": "redshiftBaseCapacity", "value": "128", "isEditable": true }, { "name": "connectionName", "value": "redshift.serverless", "isEditable": true }, { "name": "connectToRMSCatalog", "value": "false", "isEditable": false } ] } }, { "name": "OnDemand Catalog for Redshift Managed Storage", "description": "Enables you to create additional catalogs in Amazon SageMaker Lakehouse for storing data in Redshift Managed Storage", "environmentBlueprintId": "", "deploymentMode": "ON_DEMAND", "awsAccount": { "awsAccountId": "$ACCOUNT" }, "awsRegion": { "regionName": "$REGION" }, "configurationParameters": { "parameterOverrides": [ { "name": "catalogName", "isEditable": true }, { "name": "catalogDescription", "value": "RMS catalog", "isEditable": true } ] } } ]' This project profile will have the tag ACME-Application = SageMaker placed on all projects associated to the project profile and cannot be modified by the project creator. The tag CostCenter = 123 can have the value modified by the project creator because the isValueEditable property is set to true . Grant permissions for users to use the project profile during project creation. In the Authorization section of the project profile set either Selected users or groups or Allow all users and groups . The use of the allow-custom-project-resource-tags parameter means the project creator can add their own tags (key-value pair). 
The key must conform to the condition check in the policy of the provisioning role ( AmazonSageMakerProvisioning-<accountId> ). If the allow-custom-project-resource-tags parameter is changed to false after a project created tags, tags created by the project will be removed during the next project update. Updates to the project profile Updates to project resource tags are possible through the update-project-profile command. The command will replace all values in the project-resource-tags section so be sure to include the exhaustive set of tags. Updates to the project profile are reflected in projects after running the update-project command or when a new project is created using the project profile. The following example adds a new tag, ACME-BusinessUnit = Retail . There are three ways to work with the project-resource-tags parameter when updating the project profile. Passing a non-empty list of project resource tags will replace the tags currently configured on the project profile. Passing an empty list of project resource tags will clear out all previously configured tags: --project-resource-tags '[]' Not including the project resource tag parameter will keep previously configured tags as-is. aws datazone update-project-profile \ --domain-identifier "$DOMAIN_ID" \ --identifier "$PROJECT_PROFILE_ID" \ --region "$REGION" \ --project-resource-tags '[ { "key": "ACME-Application", "value": "SageMaker", "isValueEditable": false }, { "key": "CostCenter", "value": "123", "isValueEditable": true }, { "key": "ACME-BusinessUnit", "value": "Retail", "isValueEditable": false } ]' Create a new project with project resource tags The following steps walk you through creating a new project that inherits tags from the project profile and lets the project creator modify one of the tag values. Create a project using the following example CLI command. Modify the CostCenter tag value using the --resource-tags parameter. Tags configured on the project profile where the isValueEditable attribute is false will be pushed to the project automatically. aws datazone create-project \ --domain-identifier "$DOMAIN_ID" \ --region "$REGION" \ --name "$PROJECT_NAME" \ --description "New project with tags" \ --project-profile-id "$PROJECT_PROFILE_ID" \ --resource-tags '{ "CostCenter": "456" }' Update existing project with project resource tags For existing projects associated to the project profile, you must update the project for the new tags to be applied. Update the project using the following example CLI command. In this scenario, an editable value needs to be updated and a new tag added. Tag CostCenter will have its default value overwritten as “789” and the new ACME-Department = Finance tag will be added. aws datazone update-project \ --domain-identifier "$DOMAIN_ID" \ --identifier "$PROJECT_ID" \ --project-profile-version "latest" \ --region "$REGION" \ --resource-tags '{ "CostCenter": "789", "ACME-Department": "Finance" }' Project level tags (those not configured from the project profile) need to be passed during project update to be preserved. For tags with isValueEditable = true configured from the project profile, any override previously set needs to be applied or the value will revert to the default from the project profile. Validating resources are tagged Validate that tags are placed correctly. An example resource that is created by the project is the project IAM role. Viewing the tags for this role should show the tags configured from the project profile. 
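As an alternative to the console walkthrough that follows, you can also confirm the tags from the AWS CLI; a minimal sketch, where the role name shown is hypothetical and should be replaced with your project's datazone_usr_role_ role.

```bash
# List the tags attached to the project IAM role (role name is illustrative)
aws iam list-role-tags --role-name datazone_usr_role_example123
```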
Open SageMaker Unified Studio to get the project role from the Project details section of the project. The role name begins with datazone_usr_role_ . Open the IAM console . In the navigation pane, choose Roles . Search for the project IAM role. Select the Tags tab. Conclusion In this post, we discussed tagging related use cases from customers and walked through getting started with custom tags in Amazon SageMaker to place tags on the resources created by the project. By giving administrators a way to configure project profiles with standardized tag configurations, you can now help ensure consistent tagging practices across all SageMaker Unified Studio projects while maintaining compliance with SCPs. This feature addresses two critical customer needs: enforcing organizational tagging standards through automated governance mechanisms and enabling accurate cost attribution reporting across multi-service deployments. To learn more, visit Amazon SageMaker , then get started with Project resource tags . About the authors David Victoria David is a Senior Technical Product Manager with Amazon SageMaker at AWS. He focuses on improving administration and governance capabilities needed for customers to support their analytics systems. He is passionate about helping customers realize the most value from their data in a secure, governed manner. Rohit Srikanta Rohit is a Senior Software Engineer at AWS. He works on building and scaling services within Amazon SageMaker. He focuses on developing robust and scalable distributed systems and is passionate about solving complex engineering challenges to deliver maximum customer value. Ahan Malli Ahan is a Software Development Engineer at AWS. He works on the core data and governance layer behind Amazon SageMaker. He's passionate about building scalable distributed systems and streamlining developer workflows. When he's not coding, you can find him traveling or hiking Pacific Northwest trails. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/cargo/reference/features.html#the-default-feature | Features - The Cargo Book Features Cargo "features" provide a mechanism to express conditional compilation and optional dependencies . A package defines a set of named features in the [features] table of Cargo.toml , and each feature can either be enabled or disabled. Features for the package being built can be enabled on the command-line with flags such as --features . Features for dependencies can be enabled in the dependency declaration in Cargo.toml . Note : New crates or versions published on crates.io are now limited to a maximum of 300 features. Exceptions are granted on a case-by-case basis. See this blog post for details. Participation in solution discussions is encouraged via the crates.io Zulip stream. See also the Features Examples chapter for some examples of how features can be used. The [features] section Features are defined in the [features] table in Cargo.toml . Each feature specifies an array of other features or optional dependencies that it enables. The following examples illustrate how features could be used for a 2D image processing library where support for different image formats can be optionally included: [features] # Defines a feature named `webp` that does not enable any other features. webp = [] With this feature defined, cfg expressions can be used to conditionally include code to support the requested feature at compile time. For example, the lib.rs of the package could include this: #![allow(unused)] fn main() { // This conditionally includes a module which implements WEBP support. #[cfg(feature = "webp")] pub mod webp; } Cargo sets features in the package using the rustc --cfg flag , and code can test for their presence with the cfg attribute or the cfg macro . Features can list other features to enable. For example, the ICO image format can contain BMP and PNG images, so when it is enabled, it should make sure those other features are enabled, too: [features] bmp = [] png = [] ico = ["bmp", "png"] webp = [] Feature names may include characters from the Unicode XID standard (which includes most letters), and additionally allow starting with _ or digits 0 through 9 , and after the first character may also contain - , + , or . . Note : crates.io imposes additional constraints on feature name syntax that they must only be ASCII alphanumeric characters or _ , - , or + . The default feature By default, all features are disabled unless explicitly enabled. This can be changed by specifying the default feature: [features] default = ["ico", "webp"] bmp = [] png = [] ico = ["bmp", "png"] webp = [] When the package is built, the default feature is enabled, which in turn enables the listed features. This behavior can be changed by: The --no-default-features command-line flag disables the default features of the package. The default-features = false option can be specified in a dependency declaration . Note : Be careful about choosing the default feature set. The default features are a convenience that make it easier to use a package without forcing the user to carefully select which features to enable for common use, but there are some drawbacks. Dependencies automatically enable default features unless default-features = false is specified.
This can make it difficult to ensure that the default features are not enabled, especially for a dependency that appears multiple times in the dependency graph. Every package must ensure that default-features = false is specified to avoid enabling them. Another issue is that it can be a SemVer incompatible change to remove a feature from the default set, so you should be confident that you will keep those features. Optional dependencies Dependencies can be marked “optional”, which means they will not be compiled by default. For example, let’s say that our 2D image processing library uses an external package to handle GIF images. This can be expressed like this: [dependencies] gif = { version = "0.11.1", optional = true } By default, this optional dependency implicitly defines a feature that looks like this: [features] gif = ["dep:gif"] This means that this dependency will only be included if the gif feature is enabled. The same cfg(feature = "gif") syntax can be used in the code, and the dependency can be enabled just like any feature such as --features gif (see Command-line feature options below). In some cases, you may not want to expose a feature that has the same name as the optional dependency. For example, perhaps the optional dependency is an internal detail, or you want to group multiple optional dependencies together, or you just want to use a better name. If you specify the optional dependency with the dep: prefix anywhere in the [features] table, that disables the implicit feature. Note : The dep: syntax is only available starting with Rust 1.60. Previous versions can only use the implicit feature name. For example, let’s say in order to support the AVIF image format, our library needs two other dependencies to be enabled: [dependencies] ravif = { version = "0.6.3", optional = true } rgb = { version = "0.8.25", optional = true } [features] avif = ["dep:ravif", "dep:rgb"] In this example, the avif feature will enable the two listed dependencies. This also avoids creating the implicit ravif and rgb features, since we don’t want users to enable those individually as they are internal details to our crate. Note : Another way to optionally include a dependency is to use platform-specific dependencies . Instead of using features, these are conditional based on the target platform. Dependency features Features of dependencies can be enabled within the dependency declaration. The features key indicates which features to enable: [dependencies] # Enables the `derive` feature of serde. serde = { version = "1.0.118", features = ["derive"] } The default features can be disabled using default-features = false : [dependencies] flate2 = { version = "1.0.3", default-features = false, features = ["zlib-rs"] } Note : This may not ensure the default features are disabled. If another dependency includes flate2 without specifying default-features = false , then the default features will be enabled. See feature unification below for more details. Features of dependencies can also be enabled in the [features] table. The syntax is "package-name/feature-name" . For example: [dependencies] jpeg-decoder = { version = "0.1.20", default-features = false } [features] # Enables parallel processing support by enabling the "rayon" feature of jpeg-decoder. parallel = ["jpeg-decoder/rayon"] The "package-name/feature-name" syntax will also enable package-name if it is an optional dependency. Often this is not what you want. You can add a ? 
as in "package-name?/feature-name" which will only enable the given feature if something else enables the optional dependency. Note : The ? syntax is only available starting with Rust 1.60. For example, let’s say we have added some serialization support to our library, and it requires enabling a corresponding feature in some optional dependencies. That can be done like this: [dependencies] serde = { version = "1.0.133", optional = true } rgb = { version = "0.8.25", optional = true } [features] serde = ["dep:serde", "rgb?/serde"] In this example, enabling the serde feature will enable the serde dependency. It will also enable the serde feature for the rgb dependency, but only if something else has enabled the rgb dependency. Command-line feature options The following command-line flags can be used to control which features are enabled: --features FEATURES : Enables the listed features. Multiple features may be separated with commas or spaces. If using spaces, be sure to use quotes around all the features if running Cargo from a shell (such as --features "foo bar" ). If building multiple packages in a workspace , the package-name/feature-name syntax can be used to specify features for specific workspace members. --all-features : Activates all features of all packages selected on the command line. --no-default-features : Does not activate the default feature of the selected packages. NOTE : check the individual subcommand documentation for details. Not all flags are available for all subcommands. Feature unification Features are unique to the package that defines them. Enabling a feature on a package does not enable a feature of the same name on other packages. When a dependency is used by multiple packages, Cargo will use the union of all features enabled on that dependency when building it. This helps ensure that only a single copy of the dependency is used. See the features section of the resolver documentation for more details. For example, let’s look at the winapi package which uses a large number of features. If your package depends on a package foo which enables the “fileapi” and “handleapi” features of winapi , and another dependency bar which enables the “std” and “winnt” features of winapi , then winapi will be built with all four of those features enabled. A consequence of this is that features should be additive . That is, enabling a feature should not disable functionality, and it should usually be safe to enable any combination of features. A feature should not introduce a SemVer-incompatible change . For example, if you want to optionally support no_std environments, do not use a no_std feature. Instead, use a std feature that enables std . For example: #![allow(unused)] #![no_std] fn main() { #[cfg(feature = "std")] extern crate std; #[cfg(feature = "std")] pub fn function_that_requires_std() { // ... } } Mutually exclusive features There are rare cases where features may be mutually incompatible with one another. This should be avoided if at all possible, because it requires coordinating all uses of the package in the dependency graph to cooperate to avoid enabling them together. If it is not possible, consider adding a compile error to detect this scenario. For example: #[cfg(all(feature = "foo", feature = "bar"))] compile_error!("feature \"foo\" and feature \"bar\" cannot be enabled at the same time"); Instead of using mutually exclusive features, consider some other options: Split the functionality into separate packages. 
When there is a conflict, choose one feature over another . The cfg-if package can help with writing more complex cfg expressions. Architect the code to allow the features to be enabled concurrently, and use runtime options to control which is used. For example, use a config file, command-line argument, or environment variable to choose which behavior to enable. Inspecting resolved features In complex dependency graphs, it can sometimes be difficult to understand how different features get enabled on various packages. The cargo tree command offers several options to help inspect and visualize which features are enabled. Some options to try: cargo tree -e features : This will show features in the dependency graph. Each feature will appear showing which package enabled it. cargo tree -f "{p} {f}" : This is a more compact view that shows a comma-separated list of features enabled on each package. cargo tree -e features -i foo : This will invert the tree, showing how features flow into the given package “foo”. This can be useful because viewing the entire graph can be quite large and overwhelming. Use this when you are trying to figure out which features are enabled on a specific package and why. See the example at the bottom of the cargo tree page on how to read this. Feature resolver version 2 A different feature resolver can be specified with the resolver field in Cargo.toml , like this: [package] name = "my-package" version = "1.0.0" resolver = "2" See the resolver versions section for more detail on specifying resolver versions. The version "2" resolver avoids unifying features in a few situations where that unification can be unwanted. The exact situations are described in the resolver chapter , but in short, it avoids unifying in these situations: Features enabled on platform-specific dependencies for target architectures not currently being built are ignored. Build-dependencies and proc-macros do not share features with normal dependencies. Dev-dependencies do not activate features unless building a Cargo target that needs them (like tests or examples). Avoiding the unification is necessary for some situations. For example, if a build-dependency enables a std feature, and the same dependency is used as a normal dependency for a no_std environment, enabling std would break the build. However, one drawback is that this can increase build times because the dependency is built multiple times (each with different features). When using the version "2" resolver, it is recommended to check for dependencies that are built multiple times to reduce overall build time. If it is not required to build those duplicated packages with separate features, consider adding features to the features list in the dependency declaration so that the duplicates end up with the same features (and thus Cargo will build it only once). You can detect these duplicate dependencies with the cargo tree --duplicates command. It will show which packages are built multiple times; look for any entries listed with the same version. See Inspecting resolved features for more on fetching information on the resolved features. For build dependencies, this is not necessary if you are cross-compiling with the --target flag because build dependencies are always built separately from normal dependencies in that scenario. Resolver version 2 command-line flags The resolver = "2" setting also changes the behavior of the --features and --no-default-features command-line options . 
With version "1" , you can only enable features for the package in the current working directory. For example, in a workspace with packages foo and bar , and you are in the directory for package foo , and ran the command cargo build -p bar --features bar-feat , this would fail because the --features flag only allowed enabling features on foo . With resolver = "2" , the features flags allow enabling features for any of the packages selected on the command-line with -p and --workspace flags. For example: # This command is allowed with resolver = "2", regardless of which directory # you are in. cargo build -p foo -p bar --features foo-feat,bar-feat # This explicit equivalent works with any resolver version: cargo build -p foo -p bar --features foo/foo-feat,bar/bar-feat Additionally, with resolver = "1" , the --no-default-features flag only disables the default feature for the package in the current directory. With version “2”, it will disable the default features for all workspace members. Build scripts Build scripts can detect which features are enabled on the package by inspecting the CARGO_FEATURE_<name> environment variable, where <name> is the feature name converted to uppercase and - converted to _ . Required features The required-features field can be used to disable specific Cargo targets if a feature is not enabled. See the linked documentation for more details. SemVer compatibility Enabling a feature should not introduce a SemVer-incompatible change. For example, the feature shouldn’t change an existing API in a way that could break existing uses. More details about what changes are compatible can be found in the SemVer Compatibility chapter . Care should be taken when adding and removing feature definitions and optional dependencies, as these can sometimes be backwards-incompatible changes. More details can be found in the Cargo section of the SemVer Compatibility chapter. In short, follow these rules: The following is usually safe to do in a minor release: Add a new feature or optional dependency . Change the features used on a dependency . The following should usually not be done in a minor release: Remove a feature or optional dependency . Moving existing public code behind a feature . Remove a feature from a feature list . See the links for caveats and examples. Feature documentation and discovery You are encouraged to document which features are available in your package. This can be done by adding doc comments at the top of lib.rs . As an example, see the regex crate source , which when rendered can be viewed on docs.rs . If you have other documentation, such as a user guide, consider adding the documentation there (for example, see serde.rs ). If you have a binary project, consider documenting the features in the README or other documentation for the project (for example, see sccache ). Clearly documenting the features can set expectations about features that are considered “unstable” or otherwise shouldn’t be used. For example, if there is an optional dependency, but you don’t want users to explicitly list that optional dependency as a feature, exclude it from the documented list. Documentation published on docs.rs can use metadata in Cargo.toml to control which features are enabled when the documentation is built. See docs.rs metadata documentation for more details. Note : Rustdoc has experimental support for annotating the documentation to indicate which features are required to use certain APIs. See the doc_cfg documentation for more details. 
An example is the syn documentation , where you can see colored boxes which note which features are required to use it. Discovering features When features are documented in the library API, this can make it easier for your users to discover which features are available and what they do. If the feature documentation for a package isn’t readily available, you can look at the Cargo.toml file, but sometimes it can be hard to track it down. The crate page on crates.io has a link to the source repository if available. Tools like cargo vendor or cargo-clone-crate can be used to download the source and inspect it. Feature combinations Because features are a form of conditional compilation, they require an exponential number of configurations and test cases to be 100% covered. By default, tests, docs, and other tooling such as Clippy will only run with the default set of features. We encourage you to consider your strategy and tooling in regards to different feature combinations — Every project will have different requirements in conjunction with time, resources, and the cost-benefit of covering specific scenarios. Common configurations may be with / without default features, specific combinations of features, or all combinations of features. | 2026-01-13T09:29:13 |
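As a small illustration of the build-script feature detection described above, a build.rs along the following lines reads the CARGO_FEATURE_<name> variables that Cargo sets for enabled features. This is a minimal sketch; the webp feature name is just the running example from this chapter:

```rust
// build.rs -- minimal sketch: react to an enabled `webp` feature.
fn main() {
    // Cargo sets CARGO_FEATURE_<NAME> for every enabled feature,
    // with the name uppercased and `-` converted to `_`.
    if std::env::var_os("CARGO_FEATURE_WEBP").is_some() {
        // Surface a note during the build; a real build script might
        // compile or link extra native code here instead.
        println!("cargo:warning=building with WEBP support");
    }
}
```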
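Similarly, the docs.rs metadata mentioned above lives under [package.metadata.docs.rs] in Cargo.toml . The snippet below sketches one common setup, enabling all features for the documentation build; consult the docs.rs metadata documentation for the authoritative set of keys:

```toml
# Cargo.toml (sketch): ask docs.rs to build the documentation with all
# features enabled, so feature-gated APIs appear in the rendered docs.
[package.metadata.docs.rs]
all-features = true
```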
https://doc.rust-lang.org/cargo/reference/manifest.html | The Manifest Format - The Cargo Book The Manifest Format The Cargo.toml file for each package is called its manifest . It is written in the TOML format. It contains metadata that is needed to compile the package. Check out the cargo locate-project section for more detail on how cargo finds the manifest file. Every manifest file consists of the following sections: cargo-features — Unstable, nightly-only features. [package] — Defines a package. name — The name of the package. version — The version of the package. authors — The authors of the package. edition — The Rust edition. rust-version — The minimal supported Rust version. description — A description of the package. documentation — URL of the package documentation. readme — Path to the package's README file. homepage — URL of the package homepage. repository — URL of the package source repository. license — The package license. license-file — Path to the text of the license. keywords — Keywords for the package. categories — Categories of the package. workspace — Path to the workspace for the package. build — Path to the package build script. links — Name of the native library the package links with. exclude — Files to exclude when publishing. include — Files to include when publishing. publish — Can be used to prevent publishing the package. metadata — Extra settings for external tools. default-run — The default binary to run by cargo run . autolib — Disables library auto discovery. autobins — Disables binary auto discovery. autoexamples — Disables example auto discovery. autotests — Disables test auto discovery. autobenches — Disables bench auto discovery. resolver — Sets the dependency resolver to use. Target tables: (see configuration for settings) [lib] — Library target settings. [[bin]] — Binary target settings. [[example]] — Example target settings. [[test]] — Test target settings. [[bench]] — Benchmark target settings. Dependency tables: [dependencies] — Package library dependencies. [dev-dependencies] — Dependencies for examples, tests, and benchmarks. [build-dependencies] — Dependencies for build scripts. [target] — Platform-specific dependencies. [badges] — Badges to display on a registry. [features] — Conditional compilation features. [lints] — Configure linters for this package. [hints] — Provide hints for compiling this package. [patch] — Override dependencies. [replace] — Override dependencies (deprecated). [profile] — Compiler settings and optimizations. [workspace] — The workspace definition. The [package] section The first section in a Cargo.toml is [package] . [package] name = "hello_world" # the name of the package version = "0.1.0" # the current version, obeying semver The only field required by Cargo is name . If publishing to a registry, the registry may require additional fields. See the notes below and the publishing chapter for requirements for publishing to crates.io . The name field The package name is an identifier used to refer to the package. It is used when listed as a dependency in another package, and as the default name of inferred lib and bin targets. The name must use only alphanumeric characters or - or _ , and cannot be empty.
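As a quick illustration of the name rules above (the names here are made up), the following [package] section is accepted, while the commented-out values show names Cargo would reject:

```toml
# Sketch of the name-field rules; all names are hypothetical examples.
[package]
name = "image_tools-2"    # allowed: only alphanumerics, `-`, and `_`
version = "0.1.0"
# name = "image tools"    # rejected: spaces are not allowed
# name = ""               # rejected: the name cannot be empty
```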
Note that cargo new and cargo init impose some additional restrictions on the package name, such as enforcing that it is a valid Rust identifier and not a keyword. crates.io imposes even more restrictions, such as: Only ASCII characters are allowed. Do not use reserved names. Do not use special Windows names such as “nul”. Use a maximum of 64 characters of length. The version field The version field is formatted according to the SemVer specification: Versions must have three numeric parts, the major version, the minor version, and the patch version. A pre-release part can be added after a dash such as 1.0.0-alpha . The pre-release part may be separated with periods to distinguish separate components. Numeric components will use numeric comparison while everything else will be compared lexicographically. For example, 1.0.0-alpha.11 is higher than 1.0.0-alpha.4 . A metadata part can be added after a plus, such as 1.0.0+21AF26D3 . This is for informational purposes only and is generally ignored by Cargo. Cargo bakes in the concept of Semantic Versioning , so versions are considered compatible if their left-most non-zero major/minor/patch component is the same. See the Resolver chapter for more information on how Cargo uses versions to resolve dependencies. This field is optional and defaults to 0.0.0 . The field is required for publishing packages. MSRV: Before 1.75, this field was required The authors field Warning : This field is deprecated The optional authors field lists in an array the people or organizations that are considered the “authors” of the package. An optional email address may be included within angled brackets at the end of each author entry. [package] # ... authors = ["Graydon Hoare", "Fnu Lnu <no-reply@rust-lang.org>"] This field is surfaced in package metadata and in the CARGO_PKG_AUTHORS environment variable within build.rs for backwards compatibility. The edition field The edition key is an optional key that affects which Rust Edition your package is compiled with. Setting the edition key in [package] will affect all targets/crates in the package, including test suites, benchmarks, binaries, examples, etc. [package] # ... edition = '2024' Most manifests have the edition field filled in automatically by cargo new with the latest stable edition. By default cargo new creates a manifest with the 2024 edition currently. If the edition field is not present in Cargo.toml , then the 2015 edition is assumed for backwards compatibility. Note that all manifests created with cargo new will not use this historical fallback because they will have edition explicitly specified to a newer value. The rust-version field The rust-version field tells cargo what version of the Rust toolchain you support for your package. See the Rust version chapter for more detail. The description field The description is a short blurb about the package. crates.io will display this with your package. This should be plain text (not Markdown). [package] # ... description = "A short description of my package" Note : crates.io requires the description to be set. The documentation field The documentation field specifies a URL to a website hosting the crate’s documentation. If no URL is specified in the manifest file, crates.io will automatically link your crate to the corresponding docs.rs page when the documentation has been built and is available (see docs.rs queue ). [package] # ... 
documentation = "https://docs.rs/bitflags" The readme field The readme field should be the path to a file in the package root (relative to this Cargo.toml ) that contains general information about the package. This file will be transferred to the registry when you publish. crates.io will interpret it as Markdown and render it on the crate’s page. [package] # ... readme = "README.md" If no value is specified for this field, and a file named README.md , README.txt or README exists in the package root, then the name of that file will be used. You can suppress this behavior by setting this field to false . If the field is set to true , a default value of README.md will be assumed. The homepage field The homepage field should be a URL to a site that is the home page for your package. [package] # ... homepage = "https://serde.rs" A value should only be set for homepage if there is a dedicated website for the crate other than the source repository or API documentation. Do not make homepage redundant with either the documentation or repository values. The repository field The repository field should be a URL to the source repository for your package. [package] # ... repository = "https://github.com/rust-lang/cargo" The license and license-file fields The license field contains the name of the software license that the package is released under. The license-file field contains the path to a file containing the text of the license (relative to this Cargo.toml ). crates.io interprets the license field as an SPDX 2.3 license expression . The name must be a known license from the SPDX license list 3.20 . See the SPDX site for more information. SPDX license expressions support AND and OR operators to combine multiple licenses. 1 [package] # ... license = "MIT OR Apache-2.0" Using OR indicates the user may choose either license. Using AND indicates the user must comply with both licenses simultaneously. The WITH operator indicates a license with a special exception. Some examples: MIT OR Apache-2.0 LGPL-2.1-only AND MIT AND BSD-2-Clause GPL-2.0-or-later WITH Bison-exception-2.2 If a package is using a nonstandard license, then the license-file field may be specified in lieu of the license field. [package] # ... license-file = "LICENSE.txt" Note : crates.io requires either license or license-file to be set. The keywords field The keywords field is an array of strings that describe this package. This can help when searching for the package on a registry, and you may choose any words that would help someone find this crate. [package] # ... keywords = ["gamedev", "graphics"] Note : crates.io allows a maximum of 5 keywords. Each keyword must be ASCII text, have at most 20 characters, start with an alphanumeric character, and only contain letters, numbers, _ , - or + . The categories field The categories field is an array of strings of the categories this package belongs to. categories = ["command-line-utilities", "development-tools::cargo-plugins"] Note : crates.io has a maximum of 5 categories. Each category should match one of the strings available at https://crates.io/category_slugs , and must match exactly. The workspace field The workspace field can be used to configure the workspace that this package will be a member of. If not specified this will be inferred as the first Cargo.toml with [workspace] upwards in the filesystem. Setting this is useful if the member is not inside a subdirectory of the workspace root. [package] # ... 
workspace = "path/to/workspace/root" This field cannot be specified if the manifest already has a [workspace] table defined. That is, a crate cannot both be a root crate in a workspace (contain [workspace] ) and also be a member crate of another workspace (contain package.workspace ). For more information, see the workspaces chapter . The build field The build field specifies a file in the package root which is a build script for building native code. More information can be found in the build script guide . [package] # ... build = "build.rs" The default is "build.rs" , which loads the script from a file named build.rs in the root of the package. Use build = "custom_build_name.rs" to specify a path to a different file or build = false to disable automatic detection of the build script. The links field The links field specifies the name of a native library that is being linked to. More information can be found in the links section of the build script guide. For example, a crate that links a native library called “git2” (e.g. libgit2.a on Linux) may specify: [package] # ... links = "git2" The exclude and include fields The exclude and include fields can be used to explicitly specify which files are included when packaging a project to be published , and certain kinds of change tracking (described below). The patterns specified in the exclude field identify a set of files that are not included, and the patterns in include specify files that are explicitly included. You may run cargo package --list to verify which files will be included in the package. [package] # ... exclude = ["/ci", "images/", ".*"] [package] # ... include = ["/src", "COPYRIGHT", "/examples", "!/examples/big_example"] The default if neither field is specified is to include all files from the root of the package, except for the exclusions listed below. If include is not specified, then the following files will be excluded: If the package is not in a git repository, all “hidden” files starting with a dot will be skipped. If the package is in a git repository, any files that are ignored by the gitignore rules of the repository and global git configuration will be skipped. Regardless of whether exclude or include is specified, the following files are always excluded: Any sub-packages will be skipped (any subdirectory that contains a Cargo.toml file). A directory named target in the root of the package will be skipped. The following files are always included: The Cargo.toml file of the package itself is always included, it does not need to be listed in include . A minimized Cargo.lock is automatically included. See cargo package for more information. If a license-file is specified, it is always included. The options are mutually exclusive; setting include will override an exclude . If you need to have exclusions to a set of include files, use the ! operator described below. The patterns should be gitignore -style patterns. Briefly: foo matches any file or directory with the name foo anywhere in the package. This is equivalent to the pattern **/foo . /foo matches any file or directory with the name foo only in the root of the package. foo/ matches any directory with the name foo anywhere in the package. Common glob patterns like * , ? , and [] are supported: * matches zero or more characters except / . For example, *.html matches any file or directory with the .html extension anywhere in the package. ? matches any character except / . For example, foo? matches food , but not foo . [] allows for matching a range of characters. 
For example, [ab] matches either a or b . [a-z] matches letters a through z. **/ prefix matches in any directory. For example, **/foo/bar matches the file or directory bar anywhere that is directly under directory foo . /** suffix matches everything inside. For example, foo/** matches all files inside directory foo , including all files in subdirectories below foo . /**/ matches zero or more directories. For example, a/**/b matches a/b , a/x/b , a/x/y/b , and so on. ! prefix negates a pattern. For example, a pattern of src/*.rs and !foo.rs would match all files with the .rs extension inside the src directory, except for any file named foo.rs . The include/exclude list is also used for change tracking in some situations. For targets built with rustdoc , it is used to determine the list of files to track to determine if the target should be rebuilt. If the package has a build script that does not emit any rerun-if-* directives, then the include/exclude list is used for tracking if the build script should be re-run if any of those files change. The publish field The publish field can be used to control which registries names the package may be published to: [package] # ... publish = ["some-registry-name"] To prevent a package from being published to a registry (like crates.io) by mistake, for instance to keep a package private in a company, you can omit the version field. If you’d like to be more explicit, you can disable publishing: [package] # ... publish = false If publish array contains a single registry, cargo publish command will use it when --registry flag is not specified. The metadata table Cargo by default will warn about unused keys in Cargo.toml to assist in detecting typos and such. The package.metadata table, however, is completely ignored by Cargo and will not be warned about. This section can be used for tools which would like to store package configuration in Cargo.toml . For example: [package] name = "..." # ... # Metadata used when generating an Android APK, for example. [package.metadata.android] package-name = "my-awesome-android-app" assets = "path/to/static" You’ll need to look in the documentation for your tool to see how to use this field. For Rust Projects that use package.metadata tables, see: docs.rs There is a similar table at the workspace level at workspace.metadata . While cargo does not specify a format for the content of either of these tables, it is suggested that external tools may wish to use them in a consistent fashion, such as referring to the data in workspace.metadata if data is missing from package.metadata , if that makes sense for the tool in question. The default-run field The default-run field in the [package] section of the manifest can be used to specify a default binary picked by cargo run . 
For example, when there is both src/bin/a.rs and src/bin/b.rs : [package] default-run = "a" The [lints] section Override the default level of lints from different tools by assigning them to a new level in a table, for example: [lints.rust] unsafe_code = "forbid" This is short-hand for: [lints.rust] unsafe_code = { level = "forbid", priority = 0 } level corresponds to the lint levels in rustc : forbid deny warn allow priority is a signed integer that controls which lints or lint groups override other lint groups: lower (particularly negative) numbers have lower priority, being overridden by higher numbers, and show up first on the command-line to tools like rustc To know which table under [lints] a particular lint belongs under, it is the part before :: in the lint name. If there isn’t a :: , then the tool is rust . For example a warning about unsafe_code would be lints.rust.unsafe_code but a lint about clippy::enum_glob_use would be lints.clippy.enum_glob_use . For example: [lints.rust] unsafe_code = "forbid" [lints.clippy] enum_glob_use = "deny" Generally, these will only affect local development of the current package. Cargo only applies these to the current package and not to dependencies. As for dependents, Cargo suppresses lints from non-path dependencies with features like --cap-lints . MSRV: Respected as of 1.74 The [hints] section The [hints] section allows specifying hints for compiling this package. Cargo will respect these hints by default when compiling this package, though the top-level package being built can override these values through the [profile] mechanism. Hints are, by design, always safe for Cargo to ignore; if Cargo encounters a hint it doesn’t understand, or a hint it understands but with a value it doesn’t understand, it will warn, but not error. As a result, specifying hints in a crate does not impact the MSRV of the crate. Individual hints may have an associated unstable feature gate that you need to pass in order to apply the configuration they specify, but if you don’t specify that unstable feature gate, you will again get only a warning, not an error. There are no stable hints at this time. See the hint-mostly-unused documentation for information on an unstable hint. MSRV: Respected as of 1.90. The [badges] section The [badges] section is for specifying status badges that can be displayed on a registry website when the package is published. Note: crates.io previously displayed badges next to a crate on its website, but that functionality has been removed. Packages should place badges in its README file which will be displayed on crates.io (see the readme field ). [badges] # The `maintenance` table indicates the status of the maintenance of # the crate. This may be used by a registry, but is currently not # used by crates.io. See https://github.com/rust-lang/crates.io/issues/2437 # and https://github.com/rust-lang/crates.io/issues/2438 for more details. # # The `status` field is required. Available options are: # - `actively-developed`: New features are being added and bugs are being fixed. # - `passively-maintained`: There are no plans for new features, but the maintainer intends to # respond to issues that get filed. # - `as-is`: The crate is feature complete, the maintainer does not intend to continue working on # it or providing support, but it works for the purposes it was designed for. # - `experimental`: The author wants to share it with the community but is not intending to meet # anyone's particular use case. 
# - `looking-for-maintainer`: The current maintainer would like to transfer the crate to someone # else. # - `deprecated`: The maintainer does not recommend using this crate (the description of the crate # can describe why, there could be a better solution available or there could be problems with # the crate that the author does not want to fix). # - `none`: Displays no badge on crates.io, since the maintainer has not chosen to specify # their intentions, potential crate users will need to investigate on their own. maintenance = { status = "..." } Dependency sections See the specifying dependencies page for information on the [dependencies] , [dev-dependencies] , [build-dependencies] , and target-specific [target.*.dependencies] sections. The [profile.*] sections The [profile] tables provide a way to customize compiler settings such as optimizations and debug settings. See the Profiles chapter for more detail. Previously multiple licenses could be separated with a / , but that usage is deprecated. ↩ | 2026-01-13T09:29:13 |
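As a companion to the build field and the change-tracking notes in the manifest chapter above, here is a minimal editorial build.rs sketch; the native/wrapper.h path and the warning text are illustrative only. Emitting any rerun-if-* directive is what switches Cargo away from the include/exclude-based rebuild tracking described earlier.

// build.rs -- minimal sketch of a build script referenced by the `build` field.
// The file `native/wrapper.h` is a hypothetical input used for illustration.
fn main() {
    // Re-run this script only when these inputs change; emitting any
    // rerun-if-* directive replaces the include/exclude-based tracking.
    println!("cargo:rerun-if-changed=build.rs");
    println!("cargo:rerun-if-changed=native/wrapper.h");

    // Cargo exposes manifest metadata to build scripts via environment
    // variables, e.g. the deprecated authors field via CARGO_PKG_AUTHORS.
    if let Ok(authors) = std::env::var("CARGO_PKG_AUTHORS") {
        println!("cargo:warning=package authors: {authors}");
    }
}

Such a script pairs with the default build = "build.rs" setting in [package]; nothing extra is needed in the manifest unless the script lives elsewhere.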
https://doc.rust-lang.org/cargo/reference/resolver.html#features | Dependency Resolution - The Cargo Book Dependency Resolution One of Cargo’s primary tasks is to determine the versions of dependencies to use based on the version requirements specified in each package. This process is called “dependency resolution” and is performed by the “resolver”. The result of the resolution is stored in the Cargo.lock file which “locks” the dependencies to specific versions, and keeps them fixed over time. The cargo tree command can be used to visualize the result of the resolver. Constraints and Heuristics In many cases there is no single “best” dependency resolution. The resolver operates under various constraints and heuristics to find a generally applicable resolution. To understand how these interact, it is helpful to have a coarse understanding of how dependency resolution works. This pseudo-code approximates what Cargo’s resolver does: #![allow(unused)] fn main() { pub fn resolve(workspace: &[Package], policy: Policy) -> Option<ResolveGraph> { let dep_queue = Queue::new(workspace); let resolved = ResolveGraph::new(); resolve_next(dep_queue, resolved, policy) } fn resolve_next(dep_queue: Queue, resolved: ResolveGraph, policy: Policy) -> Option<ResolveGraph> { let Some(dep_spec) = policy.pick_next_dep(dep_queue) else { // Done return Some(resolved); }; if let Some(resolved) = policy.try_unify_version(dep_spec, resolved.clone()) { return Some(resolved); } let dep_versions = dep_spec.lookup_versions()?; let mut dep_versions = policy.filter_versions(dep_spec, dep_versions); while let Some(dep_version) = policy.pick_next_version(&mut dep_versions) { if policy.needs_version_unification(dep_version, &resolved) { continue; } let mut dep_queue = dep_queue.clone(); dep_queue.enqueue(dep_version.dependencies); let mut resolved = resolved.clone(); resolved.register(dep_version); if let Some(resolved) = resolve_next(dep_queue, resolved) { return Some(resolved); } } // No valid solution found, backtrack and `pick_next_version` None } } Key steps: Walking dependencies ( pick_next_dep ): The order dependencies are walked can affect how related version requirements for the same dependency get resolved, see unifying versions, and how much the resolver backtracks, affecting resolver performance, Unifying versions ( try_unify_version , needs_version_unification ): Cargo reuses versions where possible to reduce build times and allow types from common dependencies to be passed between APIs. If multiple versions would have been unified if it wasn’t for conflicts in their dependency specifications , Cargo will backtrack, erroring if no solution is found, rather than selecting multiple versions. A dependency specification or Cargo may decide that a version is undesirable, preferring to backtrack or error rather than use it. Preferring versions ( pick_next_version ): Cargo may decide that it should prefer a specific version, falling back to the next version when backtracking. Version numbers Generally, Cargo prefers the highest version currently available. For example, if you had a package in the resolve graph with: [dependencies] bitflags = "*" If at the time the Cargo.lock file is generated, the greatest version of bitflags is 1.2.1 , then the package will use 1.2.1 . For an example of a possible exception, see Rust version .
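The "highest compatible version" rule just described can be tried in isolation with a small editorial sketch; it assumes the third-party semver crate (added as semver = "1" under [dependencies]), which implements the same caret semantics Cargo uses.

// Sketch: pick the greatest candidate version that satisfies a caret
// requirement, mirroring what the resolver does for `bitflags = "1.0"`.
// Assumes the third-party `semver` crate (version 1.x) as a dependency.
use semver::{Version, VersionReq};

fn main() {
    let req = VersionReq::parse("1.0").unwrap(); // interpreted as >=1.0.0, <2.0.0
    let candidates = ["1.0.0", "1.2.1", "2.0.0"];

    let best = candidates
        .iter()
        .map(|s| Version::parse(s).unwrap())
        .filter(|v| req.matches(v))
        .max(); // Version implements Ord, so the greatest match wins

    assert_eq!(best, Some(Version::parse("1.2.1").unwrap()));
}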
Version requirements Packages specify what versions they support, rejecting all others, through version requirements . For example, if you had a package in the resolve graph with: [dependencies] bitflags = "1.0" # meaning `>=1.0.0,<2.0.0` If at the time the Cargo.lock file is generated, the greatest version of bitflags is 1.2.1 , then the package will use 1.2.1 because it is the greatest within the compatibility range. If 2.0.0 is published, it will still use 1.2.1 because 2.0.0 is considered incompatible. SemVer compatibility Cargo assumes packages follow SemVer and will unify dependency versions if they are SemVer compatible according to the Caret version requirements . If two compatible versions cannot be unified because of conflicting version requirements, Cargo will error. See the SemVer Compatibility chapter for guidance on what is considered a “compatible” change. Examples: The following two packages will have their dependencies on bitflags unified because any version picked will be compatible with each other. # Package A [dependencies] bitflags = "1.0" # meaning `>=1.0.0,<2.0.0` # Package B [dependencies] bitflags = "1.1" # meaning `>=1.1.0,<2.0.0` The following two packages will cause an error because their version requirements conflict: satisfying both would require selecting two distinct, though SemVer-compatible, versions. # Package A [dependencies] log = "=0.4.11" # Package B [dependencies] log = "=0.4.8" The following two packages will not have their dependencies on rand unified because only incompatible versions are available for each. Instead, two different versions (e.g. 0.6.5 and 0.7.3) will be resolved and built. This can lead to potential problems, see the Version-incompatibility hazards section for more details. # Package A [dependencies] rand = "0.7" # meaning `>=0.7.0,<0.8.0` # Package B [dependencies] rand = "0.6" # meaning `>=0.6.0,<0.7.0` Generally, the following two packages will not have their dependencies unified because incompatible versions are available that satisfy the version requirements: Instead, two different versions (e.g. 0.6.5 and 0.7.3) will be resolved and built. The application of other constraints or heuristics may cause these to be unified, picking one version (e.g. 0.6.5). # Package A [dependencies] rand = ">=0.6,<0.8.0" # Package B [dependencies] rand = "0.6" # meaning `>=0.6.0,<0.7.0` Version-incompatibility hazards When multiple versions of a crate appear in the resolve graph, this can cause problems when types from those crates are exposed by the crates using them. This is because the types and items are considered different by the Rust compiler, even if they have the same name. Libraries should take care when publishing a SemVer-incompatible version (for example, publishing 2.0.0 after 1.0.0 has been in use), particularly for libraries that are widely used. The “ semver trick ” is a workaround for this problem of publishing a breaking change while retaining compatibility with older versions. The linked page goes into detail about what the problem is and how to address it. In short, when a library wants to publish a SemVer-breaking release, publish the new release, and also publish a point release of the previous version that reexports the types from the newer version. These incompatibilities usually manifest as a compile-time error, but sometimes they will only appear as a runtime misbehavior. For example, let’s say there is a common library named foo that ends up appearing with both version 1.0.0 and 2.0.0 in the resolve graph.
If downcast_ref is used on an object created by a library using version 1.0.0 , and the code calling downcast_ref is downcasting to a type from version 2.0.0 , the downcast will fail at runtime. It is important to make sure that if you have multiple versions of a library that you are properly using them, especially if it is ever possible for the types from different versions to be used together. The cargo tree -d command can be used to identify duplicate versions and where they come from. Similarly, it is important to consider the impact on the ecosystem if you publish a SemVer-incompatible version of a popular library. Lock file Cargo gives the highest priority to versions contained in the Cargo.lock file , when used. This is intended to balance reproducible builds with adjusting to changes in the manifest. For example, if you had a package in the resolve graph with: [dependencies] bitflags = "*" If at the time your Cargo.lock file is generated, the greatest version of bitflags is 1.2.1 , then the package will use 1.2.1 and recorded in the Cargo.lock file. By the time Cargo next runs, bitflags 1.3.5 is out. When resolving dependencies, 1.2.1 will still be used because it is present in your Cargo.lock file. The package is then edited to: [dependencies] bitflags = "1.3.0" bitflags 1.2.1 does not match this version requirement and so that entry in your Cargo.lock file is ignored and version 1.3.5 will now be used and recorded in your Cargo.lock file. Rust version To support developing software with a minimum supported Rust version , the resolver can take into account a dependency version’s compatibility with your Rust version. This is controlled by the config field resolver.incompatible-rust-versions . With the fallback setting, the resolver will prefer packages with a Rust version that is less than or equal to your own Rust version. For example, you are using Rust 1.85 to develop the following package: [package] name = "my-cli" rust-version = "1.62" [dependencies] clap = "4.0" # resolves to 4.0.32 The resolver would pick version 4.0.32 because it has a Rust version of 1.60.0. 4.0.0 is not picked because it is a lower version number despite it also having a Rust version of 1.60.0. 4.5.20 is not picked because it is incompatible with my-cli ’s Rust version of 1.62 despite having a much higher version and it has a Rust version of 1.74.0 which is compatible with your 1.85 toolchain. If a version requirement does not include a Rust version compatible dependency version, the resolver won’t error but will instead pick a version, even if its potentially suboptimal. For example, you change the dependency on clap : [package] name = "my-cli" rust-version = "1.62" [dependencies] clap = "4.2" # resolves to 4.5.20 No version of clap matches that version requirement that is compatible with Rust version 1.62. The resolver will then pick an incompatible version, like 4.5.20 despite it having a Rust version of 1.74. When the resolver selects a dependency version of a package, it does not know all the workspace members that will eventually have a transitive dependency on that version and so it cannot take into account only the Rust versions relevant for that dependency. The resolver has heuristics to find a “good enough” solution when workspace members have different Rust versions. This applies even for packages in a workspace without a Rust version. When a workspace has members with different Rust versions, the resolver may pick a lower dependency version than necessary. 
For example, you have the following workspace members: [package] name = "a" rust-version = "1.62" [package] name = "b" [dependencies] clap = "4.2" # resolves to 4.5.20 Though package b does not have a Rust version and could use a higher version like 4.5.20, 4.0.32 will be selected because of package a ’s Rust version of 1.62. Or the resolver may pick too high of a version. For example, you have the following workspace members: [package] name = "a" rust-version = "1.62" [dependencies] clap = "4.2" # resolves to 4.5.20 [package] name = "b" [dependencies] clap = "4.5" # resolves to 4.5.20 Though each package has a version requirement for clap that would meet its own Rust version, because of version unification , the resolver will need to pick one version that works in both cases and that would be a version like 4.5.20. Features For the purpose of generating Cargo.lock , the resolver builds the dependency graph as-if all features of all workspace members are enabled. This ensures that any optional dependencies are available and properly resolved with the rest of the graph when features are added or removed with the --features command-line flag . The resolver runs a second time to determine the actual features used when compiling a crate, based on the features selected on the command-line. Dependencies are resolved with the union of all features enabled on them. For example, if one package depends on the im package with the serde dependency enabled and another package depends on it with the rayon dependency enabled, then im will be built with both features enabled, and the serde and rayon crates will be included in the resolve graph. If no packages depend on im with those features, then those optional dependencies will be ignored, and they will not affect resolution. When building multiple packages in a workspace (such as with --workspace or multiple -p flags), the features of the dependencies of all of those packages are unified. If you have a circumstance where you want to avoid that unification for different workspace members, you will need to build them via separate cargo invocations. The resolver will skip over versions of packages that are missing required features. For example, if a package depends on version ^1 of regex with the perf feature , then the oldest version it can select is 1.3.0 , because versions prior to that did not contain the perf feature. Similarly, if a feature is removed from a new release, then packages that require that feature will be stuck on the older releases that contain that feature. It is discouraged to remove features in a SemVer-compatible release. Beware that optional dependencies also define an implicit feature, so removing an optional dependency or making it non-optional can cause problems, see removing an optional dependency . Feature resolver version 2 When resolver = "2" is specified in Cargo.toml (see resolver versions below), a different feature resolver is used which uses a different algorithm for unifying features. The version "1" resolver will unify features for a package no matter where it is specified. The version "2" resolver will avoid unifying features in the following situations: Features for target-specific dependencies are not enabled if the target is not currently being built. For example: [dependencies.common] version = "1.0" features = ["f1"] [target.'cfg(windows)'.dependencies.common] version = "1.0" features = ["f2"] When building this example for a non-Windows platform, the f2 feature will not be enabled. 
Features enabled on build-dependencies or proc-macros will not be unified when those same dependencies are used as a normal dependency. For example: [dependencies] log = "0.4" [build-dependencies] log = {version = "0.4", features=['std']} When building the build script, the log crate will be built with the std feature. When building the library of your package, it will not enable the feature. Features enabled on dev-dependencies will not be unified when those same dependencies are used as a normal dependency, unless those dev-dependencies are currently being built. For example: [dependencies] serde = {version = "1.0", default-features = false} [dev-dependencies] serde = {version = "1.0", features = ["std"]} In this example, the library will normally link against serde without the std feature. However, when built as a test or example, it will include the std feature. For example, cargo test or cargo build --all-targets will unify these features. Note that dev-dependencies in dependencies are always ignored, this is only relevant for the top-level package or workspace members. links The links field is used to ensure only one copy of a native library is linked into a binary. The resolver will attempt to find a graph where there is only one instance of each links name. If it is unable to find a graph that satisfies that constraint, it will return an error. For example, it is an error if one package depends on libgit2-sys version 0.11 and another depends on 0.12 , because Cargo is unable to unify those, but they both link to the git2 native library. Due to this requirement, it is encouraged to be very careful when making SemVer-incompatible releases with the links field if your library is in common use. Yanked versions Yanked releases are those that are marked that they should not be used. When the resolver is building the graph, it will ignore all yanked releases unless they already exist in the Cargo.lock file or are explicitly requested by the --precise flag of cargo update (nightly only). Dependency updates Dependency resolution is automatically performed by all Cargo commands that need to know about the dependency graph. For example, cargo build will run the resolver to discover all the dependencies to build. After the first time it runs, the result is stored in the Cargo.lock file. Subsequent commands will run the resolver, keeping dependencies locked to the versions in Cargo.lock if it can . If the dependency list in Cargo.toml has been modified, for example changing the version of a dependency from 1.0 to 2.0 , then the resolver will select a new version for that dependency that matches the new requirements. If that new dependency introduces new requirements, those new requirements may also trigger additional updates. The Cargo.lock file will be updated with the new result. The --locked or --frozen flags can be used to change this behavior to prevent automatic updates when requirements change, and return an error instead. cargo update can be used to update the entries in Cargo.lock when new versions are published. Without any options, it will attempt to update all packages in the lock file. The -p flag can be used to target the update for a specific package, and other flags such as --recursive or --precise can be used to control how versions are selected. Overrides Cargo has several mechanisms to override dependencies within the graph. The Overriding Dependencies chapter goes into detail on how to use overrides. 
The overrides appear as an overlay to a registry, replacing the patched version with the new entry. Otherwise, resolution is performed like normal. Dependency kinds There are three kinds of dependencies in a package: normal, build , and dev . For the most part these are all treated the same from the perspective of the resolver. One difference is that dev-dependencies for non-workspace members are always ignored, and do not influence resolution. Platform-specific dependencies with the [target] table are resolved as-if all platforms are enabled. In other words, the resolver ignores the platform or cfg expression. dev-dependency cycles Usually the resolver does not allow cycles in the graph, but it does allow them for dev-dependencies . For example, project “foo” has a dev-dependency on “bar”, which has a normal dependency on “foo” (usually as a “path” dependency). This is allowed because there isn’t really a cycle from the perspective of the build artifacts. In this example, the “foo” library is built (which does not need “bar” because “bar” is only used for tests), and then “bar” can be built depending on “foo”, then the “foo” tests can be built linking to “bar”. Beware that this can lead to confusing errors. In the case of building library unit tests, there are actually two copies of the library linked into the final test binary: the one that was linked with “bar”, and the one built that contains the unit tests. Similar to the issues highlighted in the Version-incompatibility hazards section, the types between the two are not compatible. Be careful when exposing types of “foo” from “bar” in this situation, since the “foo” unit tests won’t treat them the same as the local types. If possible, try to split your package into multiple packages and restructure it so that it remains strictly acyclic. Resolver versions Different resolver behavior can be specified through the resolver version in Cargo.toml like this: [package] name = "my-package" version = "1.0.0" resolver = "2" "1" (default) "2" ( edition = "2021" default): Introduces changes in feature unification . See the features chapter for more details. "3" ( edition = "2024" default, requires Rust 1.84+): Change the default for resolver.incompatible-rust-versions from allow to fallback The resolver is a global option that affects the entire workspace. The resolver version in dependencies is ignored, only the value in the top-level package will be used. If using a virtual workspace , the version should be specified in the [workspace] table, for example: [workspace] members = ["member1", "member2"] resolver = "2" MSRV: Requires 1.51+ Recommendations The following are some recommendations for setting the version within your package, and for specifying dependency requirements. These are general guidelines that should apply to common situations, but of course some situations may require specifying unusual requirements. Follow the SemVer guidelines when deciding how to update your version number, and whether or not you will need to make a SemVer-incompatible version change. Use caret requirements for dependencies, such as "1.2.3" , for most situations. This ensures that the resolver can be maximally flexible in choosing a version while maintaining build compatibility. Specify all three components with the version you are currently using. This helps set the minimum version that will be used, and ensures that other users won’t end up with an older version of the dependency that might be missing something that your package requires. 
Avoid * requirements, as they are not allowed on crates.io , and they can pull in SemVer-breaking changes during a normal cargo update . Avoid overly broad version requirements. For example, >=2.0.0 can pull in any SemVer-incompatible version, like version 5.0.0 , which can result in broken builds in the future. Avoid overly narrow version requirements if possible. For example, if you specify a tilde requirement like bar="~1.3" , and another package specifies a requirement of bar="1.4" , this will fail to resolve, even though minor releases should be compatible. Try to keep the dependency versions up-to-date with the actual minimum versions that your library requires. For example, if you have a requirement of bar="1.0.12" , and then in a future release you start using new features added in the 1.1.0 release of “bar”, update your dependency requirement to bar="1.1.0" . If you fail to do this, it may not be immediately obvious because Cargo can opportunistically choose the newest version when you run a blanket cargo update . However, if another user depends on your library, and runs cargo update your-library , it will not automatically update “bar” if it is locked in their Cargo.lock . It will only update “bar” in that situation if the dependency declaration is also updated. Failure to do so can cause confusing build errors for the user using cargo update your-library . If two packages are tightly coupled, then an = dependency requirement may help ensure that they stay in sync. For example, a library with a companion proc-macro library will sometimes make assumptions between the two libraries that won’t work well if the two are out of sync (and it is never expected to use the two libraries independently). The parent library can use an = requirement on the proc-macro, and re-export the macros for easy access. 0.0.x versions can be used for packages that are permanently unstable. In general, the stricter you make the dependency requirements, the more likely it will be for the resolver to fail. Conversely, if you use requirements that are too loose, it may be possible for new versions to be published that will break the build. Troubleshooting The following illustrates some problems you may experience, and some possible solutions. Why was a dependency included? Say you see dependency rand in the cargo check output but don’t think it’s needed and want to understand why it’s being pulled in. You can run $ cargo tree --workspace --target all --all-features --invert rand rand v0.8.5 └── ... rand v0.8.5 └── ... Why was that feature on this dependency enabled? You might identify that it was an activated feature that caused rand to show up. To figure out which package activated the feature, you can add the --edges features $ cargo tree --workspace --target all --all-features --edges features --invert rand rand v0.8.5 └── ... rand v0.8.5 └── ... Unexpected dependency duplication You see multiple instances of rand when you run $ cargo tree --workspace --target all --all-features --duplicates rand v0.7.3 └── ... rand v0.8.5 └── ... The resolver algorithm has converged on a solution that includes two copies of a dependency when one would suffice. For example: # Package A [dependencies] rand = "0.7" # Package B [dependencies] rand = ">=0.6" # note: open requirements such as this are discouraged In this example, Cargo may build two copies of the rand crate, even though a single copy at version 0.7.3 would meet all requirements. 
This is because the resolver’s algorithm favors building the latest available version of rand for Package B, which is 0.8.5 at the time of this writing, and that is incompatible with Package A’s specification. The resolver’s algorithm does not currently attempt to “deduplicate” in this situation. The use of open-ended version requirements like >=0.6 is discouraged in Cargo. But, if you run into this situation, the cargo update command with the --precise flag can be used to manually remove such duplications. Why wasn’t a newer version selected? Say you noticed that the latest version of a dependency wasn’t selected when you ran: $ cargo update You can enable some extra logging to see why this happened: $ env CARGO_LOG=cargo::core::resolver=trace cargo update Note: Cargo log targets and levels may change over time. SemVer-breaking patch release breaks the build Sometimes a project may inadvertently publish a point release with a SemVer-breaking change. When users update with cargo update , they will pick up this new release, and then their build may break. In this situation, it is recommended that the project should yank the release, and either remove the SemVer-breaking change, or publish it as a new SemVer-major version increase. If the change happened in a third-party project, if possible try to (politely!) work with the project to resolve the issue. While waiting for the release to be yanked, some workarounds depend on the circumstances: If your project is the end product (such as a binary executable), just avoid updating the offending package in Cargo.lock . This can be done with the --precise flag in cargo update . If you publish a binary on crates.io , then you can temporarily add an = requirement to force the dependency to a specific good version. Binary projects can alternatively recommend users to use the --locked flag with cargo install to use the original Cargo.lock that contains the known good version. Libraries may also consider publishing a temporary new release with stricter requirements that avoid the troublesome dependency. You may want to consider using range requirements (instead of = ) to avoid overly-strict requirements that may conflict with other packages using the same dependency. Once the problem has been resolved, you can publish another point release that relaxes the dependency back to a caret requirement. If it looks like the third-party project is unable or unwilling to yank the release, then one option is to update your code to be compatible with the changes, and update the dependency requirement to set the minimum version to the new release. You will also need to consider if this is a SemVer-breaking change of your own library, for example if it exposes types from the dependency. | 2026-01-13T09:29:13 |
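The runtime side of the version-incompatibility hazard covered in the resolver chapter above can be reproduced in one self-contained editorial sketch by simulating the two library versions as two modules; foo_v1 and foo_v2 are hypothetical stand-ins for the two resolved copies of foo.

// Sketch of the downcast hazard: two "versions" of a library are simulated
// as two modules. The types share a name but are distinct to the compiler,
// so downcasting to the other version's type fails at runtime.
use std::any::Any;

mod foo_v1 {
    pub struct Event;
}

mod foo_v2 {
    pub struct Event;
}

fn main() {
    // Value produced by code compiled against "version 1.0.0" of foo.
    let boxed: Box<dyn Any> = Box::new(foo_v1::Event);

    // A consumer compiled against "version 2.0.0" downcasts to its own Event
    // type and gets None, even though both types are spelled `Event`.
    assert!(boxed.downcast_ref::<foo_v2::Event>().is_none());
    assert!(boxed.downcast_ref::<foo_v1::Event>().is_some());
}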
https://doc.rust-lang.org/reference/attributes.html#grammar-MetaItem | Attributes - The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameters accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function.
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning.
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicate that a type will have more fields/variants added in future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13 |
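Tying together several of the attribute forms from the reference page above, here is a short self-contained editorial sketch (not from the page); the item names are placeholders, and the #[unsafe(no_mangle)] syntax requires a toolchain recent enough to support unsafe attributes.

#![allow(dead_code)] // inner attribute applying to the whole crate; MetaListPaths style

// cfg_attr conditionally includes another attribute (here, a derive).
#[cfg_attr(debug_assertions, derive(Debug))]
struct Config {
    #[doc = "Port the service listens on."] // MetaNameValueStr style
    port: u16,
}

// A tool attribute: consumed by rustfmt, ignored by the compiler.
#[rustfmt::skip]
const TABLE: [u8; 4] = [0, 1,
                        2, 3];

// An unsafe attribute: exporting an unmangled symbol is a promise the
// compiler cannot check, so the attribute is wrapped in unsafe(..).
#[unsafe(no_mangle)]
pub extern "C" fn answer() -> u32 {
    42
}

fn main() {
    let cfg = Config { port: 8080 };
    println!("listening on port {}", cfg.port);
}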
https://aws.amazon.com/blogs/big-data/category/serverless/page/2/ | Serverless | AWS Big Data Blog Category: Serverless Accelerate data pipeline creation with the new visual interface in Amazon OpenSearch Ingestion by Samuel Selvan and Jagadish Kumar on 22 APR 2025 in Amazon OpenSearch Service , Analytics , Launch , Serverless Permalink Comments Share Today, we’re launching a new visual interface for OpenSearch Ingestion that makes it simple to create and manage your data pipelines from the AWS Management Console. With this new feature, you can build pipelines in minutes without writing complex configurations manually. In this post, we walk through how these new features work and how you can use them to accelerate your data ingestion projects. Build a data lakehouse in a hybrid Environment using Amazon EMR Serverless, Apache DolphinScheduler, and TiDB by Shiyang Wei on 20 MAR 2025 in Advanced (300) , Amazon EMR , Serverless Permalink Comments Share This post discusses a decoupled approach of building a serverless data lakehouse using AWS Cloud-centered services, including Amazon EMR Serverless, Amazon Athena, Amazon Simple Storage Service (Amazon S3), Apache DolphinScheduler (an open source data job scheduler) as well as PingCAP TiDB, a third-party data warehouse product that can be deployed either on premises or on the cloud or through a software as a service (SaaS). Amazon Redshift Serverless adds higher base capacity of up to 1024 RPUs by Ricardo Serafim , Harshida Patel , and Milind Oke on 10 FEB 2025 in Amazon Redshift , Analytics , Serverless Permalink Comments Share In this post, we explore the new higher base capacity of 1024 RPUs in Redshift Serverless, which doubles the previous maximum of 512 RPUs. This enhancement empowers you to get high performance for your workload containing highly complex queries and write-intensive workloads, with concurrent data ingestion and transformation tasks that require high throughput and low latency with Redshift Serverless. Jumia builds a next-generation data platform with metadata-driven specification frameworks by Ramon Diez Lejarazu , Hélder Russa , Pedro Gonçalves , and Paula Marenco Aguilar on 20 DEC 2024 in Customer Solutions , Serverless Permalink Comments Share Jumia is a technology company born in 2012, present in 14 African countries, with its main headquarters in Lagos, Nigeria. In this post, we share part of the journey that Jumia took with AWS Professional Services to modernize its data platform that ran under a Hadoop distribution to AWS serverless based solutions. Introducing Point in Time queries and SQL/PPL support in Amazon OpenSearch Serverless by Jagadish Kumar , Frank Dattalo , and Milav Shah on 19 NOV 2024 in Amazon OpenSearch Service , Analytics , Announcements , Serverless Permalink Comments Share Today we announced support for three new features for Amazon OpenSearch Serverless: Point in Time (PIT) search, which enables you to maintain stable sorting for deep pagination in the presence of updates, and PPL and SQL, which give you new ways to query your data. In this post, we discuss the benefits of these new features and how to get started.
Enhance Amazon EMR scaling capabilities with Application Master Placement by Lorenzo Ripani , Bezuayehu Wate , Sajjan Bhattarai , and Miranda Diaz on 14 OCT 2024 in Amazon EMR , Analytics , Intermediate (200) , Serverless Permalink Comments Share Starting with the Amazon EMR 7.2 release, Amazon EMR on EC2 introduced a new feature called Application Master (AM) label awareness, which allows users to enable YARN node labels to allocate the AM containers within On-Demand nodes only. In this post, we explore the key features and use cases where this new functionality can provide significant benefits, enabling cluster administrators to achieve optimal resource utilization, improved application reliability, and cost-efficiency in your EMR on EC2 clusters. Extract insights in a 30TB time series workload with Amazon OpenSearch Serverless by Satish Nandi , Prashant Agrawal , Qiaoxuan Xue , and Milav Shah on 07 OCT 2024 in Amazon OpenSearch Service , Intermediate (200) , Serverless Permalink Comments Share We recently announced a new capacity level of 30TB for time series data per account per AWS Region. The OpenSearch Serverless compute capacity for data ingestion and search/query is measured in OpenSearch Compute Units (OCUs), which are shared among various collections with the same AWS Key Management Service (AWS KMS) key. This post discusses how you can analyze 30TB time series datasets with OpenSearch Serverless. Unlock scalable analytics with a secure connectivity pattern in AWS Glue to read from or write to Snowflake by Caio Montovani , Kamen Sharlandjiev , Bosco Albuquerque , Kartikay Khator , and Navnit Shukla on 19 AUG 2024 in Analytics , AWS Big Data , AWS Glue , Serverless Permalink Comments Share In today’s data-driven world, the ability to seamlessly integrate and utilize diverse data sources is critical for gaining actionable insights and driving innovation. As organizations increasingly rely on data stored across various platforms, such as Snowflake, Amazon Simple Storage Service (Amazon S3), and various software as a service (SaaS) applications, the challenge of bringing these […] Deliver Amazon CloudWatch logs to Amazon OpenSearch Serverless by Balaji Mohan , Muthu Pitchaimani , and Souvik Bose on 31 JUL 2024 in Amazon CloudWatch , Amazon OpenSearch Service , Serverless , Technical How-to Permalink Comments Share In this blog post, we will show how to use Amazon OpenSearch Ingestion to deliver CloudWatch logs to OpenSearch Serverless in near real-time. We outline a mechanism to connect a Lambda subscription filter with OpenSearch Ingestion and deliver logs to OpenSearch Serverless without explicitly needing a separate subscription filter for it. Perform reindexing in Amazon OpenSearch Serverless using Amazon OpenSearch Ingestion by Utkarsh Agarwal and Prashant Agrawal on 25 JUN 2024 in Amazon OpenSearch Service , Analytics , Serverless , Technical How-to Permalink Comments Share In this post, we outline the steps to copy data between two indexes in the same OpenSearch Serverless collection using the new OpenSearch source feature of OpenSearch Ingestion. This is particularly useful for reindexing operations where you want to change your data schema. OpenSearch Serverless and OpenSearch Ingestion are both serverless services that enable you to seamlessly handle your data workflows, providing optimal performance and scalability. ← Older posts Newer posts → Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Agentic AI? 
| 2026-01-13T09:29:13 |
https://aws.amazon.com/blogs/big-data/navigating-architectural-choices-for-a-lakehouse-using-amazon-sagemaker/ | Navigating architectural choices for a lakehouse using Amazon SageMaker | AWS Big Data Blog Skip to Main Content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account AWS Blogs Home Blogs Editions AWS Big Data Blog Navigating architectural choices for a lakehouse using Amazon SageMaker by Lakshmi Nair and Saman Irfan on 12 JAN 2026 in Amazon SageMaker Data & AI Governance , Amazon SageMaker Lakehouse , Amazon SageMaker Unified Studio , Analytics Permalink Comments Share Organizations today are using data more than ever to drive decision-making and innovation. Because they work with petabytes of information, they have traditionally gravitated towards two distinct paradigms—data lakes and data warehouses. While each paradigm excels at specific use cases, they often create unintended barriers between the data assets. Data lakes are often built on object storage such as Amazon Simple Storage Service (Amazon S3) , which provide flexibility by supporting diverse data formats and schema-on-read capabilities. This enables multi-engine access where various processing frameworks (such as Apache Spark , Trino , and Presto ) can query the same data. On the other hand, data warehouses (such as Amazon Redshift ) excel in areas such as ACID (atomicity, consistency, isolation and durability) compliance, performance optimization, and straightforward deployment, making them suitable for structured and complex queries. As data volumes grow and analytics needs become more complex, organizations seek to bridge these silos and use the strengths of both paradigms. This is where the concept of lakehouse architecture is applied, offering a unified approach to data management and analytics. Over time, several distinct lakehouse approaches have emerged. In this post, we show you how to evaluate and choose the right lakehouse pattern for your needs. The data lake centric lakehouse approach begins with the scalability, cost-effectiveness, and flexibility of a traditional data lake built on object storage. The goal is to add a layer of transactional capabilities and data management traditionally found in databases, primarily through open table formats (such as Apache Hudi , Delta Lake , or Apache Iceberg ). While open table formats have made significant strides by introducing ACID guarantees for single-table operations in data lakes, implementing multi-table transactions with complex referential integrity constraints and joins remains challenging. The fundamental nature of querying petabytes of files on object storage, often through distributed query engines, can result in slow interactive queries at high concurrency when compared to a highly optimized, indexed, and materialized data warehouse. Open table formats introduce compaction and indexing, but the full suite of intelligent storage optimizations found in highly mature, proprietary data warehouses is still evolving in data lake-centric architecture. The data warehouse centric lakehouse approach offers robust analytical capabilities but has significant interoperability challenges. 
Though data warehouses provide Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC) drivers for external access, the underlying data remains in proprietary formats, making it difficult for external tools or services to directly access it without complex extract, transform, and load (ETL) or API layers. This can lead to data duplication and latency. A data warehouse architecture might support reading open table formats, but its ability to write to them or participate in their transactional layers can be limited. This restricts true interoperability and can create shadow data silos. On AWS, you can build a modern, open lakehouse architecture to achieve unified access to both data warehouses and data lakes. By using this approach, you can build sophisticated analytics, machine learning (ML), and generative AI applications while maintaining a single source of truth for your data. You don’t have to choose between a data lake and a data warehouse. You can use existing investments and preserve the strengths of both paradigms while eliminating their respective weaknesses. The lakehouse architecture on AWS embraces open table formats such as Apache Hudi, Delta Lake, and Apache Iceberg. You can accelerate your lakehouse journey with the next generation of Amazon SageMaker, which delivers an integrated experience for analytics and AI with unified access to data. SageMaker is built on an open lakehouse architecture that is fully compatible with Apache Iceberg. By extending support for Apache Iceberg REST APIs, SageMaker significantly improves interoperability and accessibility across various Apache Iceberg-compatible query engines and tools. At the core of this architecture is a metadata management layer built on AWS Glue Data Catalog and AWS Lake Formation, which provide unified governance and centralized access control. Foundations of the Amazon SageMaker lakehouse architecture The lakehouse architecture of Amazon SageMaker has four main components that work together to create a unified data platform. Flexible storage to adapt to the workload patterns and requirements Technical catalog that serves as a single source of truth for all metadata Integrated permission management with fine-grained access control across all data assets Open access framework built on Apache Iceberg REST APIs for universal compatibility Catalogs and permissions When building an open lakehouse, the catalog, your central repository of metadata, is a critical component for data discovery and governance. There are two types of catalogs in the lakehouse architecture of Amazon SageMaker: managed catalogs and federated catalogs. A managed catalog is one where the metadata is managed by the lakehouse and the data is stored in a general purpose S3 bucket. A federated catalog mounts or connects to external or existing data sources so you can query data from data sources such as Amazon Redshift, Snowflake, and Amazon DynamoDB without explicitly moving the data. For more information, see Data connections in the lakehouse architecture of Amazon SageMaker. You can use an AWS Glue crawler to automatically discover and register this metadata in Data Catalog. Data Catalog stores the schema and table metadata of your data assets, effectively turning files into logical tables. After your data is cataloged, the next challenge is controlling who can access it. While you could use complex S3 bucket policies for every folder, this approach is difficult to manage and scale.
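To make the cataloging and access-control steps just described concrete, here is a minimal boto3 sketch that registers an S3 prefix with an AWS Glue crawler and then grants column-level access through Lake Formation. The crawler name, IAM role, database, table, columns, and bucket path are placeholders for illustration, not values from the post.

```python
import boto3

glue = boto3.client("glue")
lakeformation = boto3.client("lakeformation")

# Register raw S3 data in the Data Catalog with a crawler (placeholder names).
glue.create_crawler(
    Name="lakehouse-raw-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="lakehouse_raw",
    Targets={"S3Targets": [{"Path": "s3://example-lakehouse-bucket/raw/orders/"}]},
)
glue.start_crawler(Name="lakehouse-raw-crawler")

# Grant an analyst role SELECT on only two columns of the crawled table.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/AnalystRole"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "lakehouse_raw",
            "Name": "orders",
            "ColumnNames": ["order_id", "status"],
        }
    },
    Permissions=["SELECT"],
)
```

The crawler runs asynchronously, so in practice the grant would be applied after the table appears in the Data Catalog.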
Lake Formation provides a centralized database-style permissions model on the Data Catalog, giving you the flexibility to grant or revoke fine-grained access at row, column, and cell levels for individual users or roles. Open access with Apache Iceberg REST APIs The lakehouse architecture described in the preceding section and shown in the following figure also uses the AWS Glue Iceberg REST catalog through the service endpoint, which provides OSS compatibility, enabling increased interoperability for managing Iceberg table metadata across Spark and other open source analytics engines. You can choose the appropriate API based on table format and use case requirements. In this post, we explore various lakehouse architecture patterns, focusing on how to optimally use data lake and data warehouse to create robust, scalable, and performance-driven data solutions. Bringing data into your lakehouse on AWS When building a lakehouse architecture, you can choose from three distinct patterns to access and integrate your data, each offering unique advantages for different use cases. Traditional ETL is the classic method of extracting data, transforming it and loading it into your lakehouse. When to use it: You need complex transformations and require highly curated and optimized data sets for downstream applications for better performance You need to perform historical data migrations You need data quality enforcement and standardization at scale You need highly governed curated data in a lakehouse Zero-ETL is a modern architectural pattern where data automatically and continuously replicates from a source system to lakehouse with minimal or no manual intervention or custom code. Behind the scenes, the pattern uses change data capture (CDC) to automatically stream all new inserts, updates, and deletes from the source to the target. This architectural pattern is effective when the source system maintains a high degree of data cleanliness and structure, minimizing the need for heavy pre-load transformations, or when data refinement and aggregation can occur at the target end within lakehouse. Zero-ETL replicates data with minimal delay, and the transformation logic is performed on the target end closer to where the insights are generated by shifting it to a more efficient, post-load phase. When to use it: You need to reduce operational complexity and gain flexible control over data replication for both near real-time and batch use cases. You need limited customization. While zero-ETL implies minimal work, some light transformations might still be required on the replicated data. You need to minimize the need for specialized ETL expertise. You need to maintain data freshness without processing delays and reduce risk of data inconsistencies. Zero-ETL facilitates faster time-to-insight. Data federation (no-movement approach) is a method that enables querying and combining data from multiple disparate sources without physically moving or copying it into a single centralized location. This query-in-place approach allows the query engine to connect directly to the external source systems, delegate and execute queries, and combine results on the fly for presentation to the user. The effectiveness of this architecture pattern depends on three key factors: network latency between systems, source system performance capabilities, and the query engine’s ability to push down predicates to optimize query execution. 
This no-movement approach can significantly reduce data duplication and storage costs while providing real-time access to source data. When to use it: You need to query the source system directly to use operational analytics. You don’t want to duplicate data to save on storage space and associated costs within your Lakehouse. You’re willing to trade some query performance and governance for immediate data availability and one-time analysis of live data. You don’t need to frequently query the data. Understanding the storage layer of your lakehouse on AWS Now that you’ve seen different ways to get data into a lakehouse, the next question is where to store the data. As shown in the following figure, you can architect a modern open lakehouse on AWS by storing the data in a data lake (Amazon S3 or Amazon S3 Tables ) or data warehouse ( Redshift Managed Storage ), so you can optimize for both flexibility and performance based on your specific workload requirements. A modern lakehouse isn’t a single storage technology but a strategic combination of them. The decision of where and how to store your data impacts everything from the speed of your dashboards to the efficiency of your ML models. You must consider not only the initial cost of storage but also the long-term costs of data retrieval, the latency required by your users, and the governance necessary to maintain a single source of truth. In this section, we delve into architectural patterns for the data lake and the data warehouse and provide a clear framework for when to use each storage pattern. While they have historically been seen as competing architectures, the modern and open lakehouse approach uses both to create a single, powerful data platform. General purpose S3 A general purpose S3 bucket in Amazon S3 is the standard, foundational bucket type used for storing objects. It provides flexibility so that you can store your data in its native format without a rigid upfront schema. Because of the ability of an S3 bucket to decouple storage from compute, you can store the data in a highly scalable location, while a variety of query engines can access and process it independently. This means that you can choose the right tool for the job without having to move or duplicate the data. You can store petabytes of data without ever having to provision or manage storage capacity, and its tiered storage classes provide significant cost savings by automatically moving less-frequently accessed data to more affordable storage. The existing Data Catalog functions as a managed catalog. It’s identified by the AWS account number, which means there is no migration needed for existing Data Catalogs; they’re already available in the lakehouse and become the default catalog for the new data, as shown in the following figure. A foundational data lake on general purpose S3 is highly efficient for append-only workloads. However, its file-based nature lacks the transactional guarantees of a traditional database. This is where you can use the support of open-source transactional table formats such as Apache Hudi, Delta Lake, and Apache Iceberg. With these table formats, you can implement multi-version concurrency control, allowing multiple readers and writers to operate simultaneously without conflicts. They provide snapshot isolation, so that readers see consistent views of data even during write operations. A typical medallion architecture pattern with Apache Iceberg is depicted in the following figure. 
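As a rough, hedged illustration of the transactional behavior that open table formats add on top of Amazon S3, the following PySpark sketch creates an Iceberg table in the Data Catalog and applies a MERGE, the kind of atomic upsert a bronze-to-silver step in the medallion pattern performs. The catalog configuration, database, table, and bucket names are assumptions for this example and must match your own environment; the Iceberg Spark runtime and AWS bundle are assumed to be on the classpath.

```python
from pyspark.sql import SparkSession

# Assumed catalog name ("glue") and warehouse path; adjust to your setup.
spark = (
    SparkSession.builder
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-lakehouse-bucket/warehouse/")
    .getOrCreate()
)

spark.sql("""
    CREATE TABLE IF NOT EXISTS glue.silver.orders (
        order_id BIGINT,
        status STRING,
        updated_at TIMESTAMP
    ) USING iceberg
""")

# Upsert the latest bronze-layer changes into the silver table in one atomic commit;
# concurrent readers keep seeing a consistent snapshot while this runs.
spark.sql("""
    MERGE INTO glue.silver.orders AS t
    USING glue.bronze.orders_updates AS s
    ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```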
When building a lakehouse on AWS with Apache Iceberg, customers can choose between two primary approaches for storing their data on Amazon S3: General purpose S3 buckets with self-managed Iceberg or using the fully managed S3 Tables. Each path has distinct advantages, and the right choice depends on your specific needs for control, performance, and operational overhead. General purpose S3 with Self-managed Iceberg Using general purpose S3 buckets with self-managed Iceberg is a traditional approach where you store both data and Iceberg metadata files in standard S3 buckets. With this option, you maintain full control but are responsible for managing the complete Iceberg table lifecycle, including essential maintenance tasks such as compaction and garbage collection. When to use it: Maximum control: This approach provides complete control over the entire data life cycle. You can fine-tune every aspect of table maintenance, such as defining your own compaction schedules and strategies, which can be crucial for specific high-performance workloads or to optimize costs. Flexibility and customization: It is ideal for organizations with strong in-house data engineering expertise that need to integrate with a wider range of open-source tools and custom scripts. You can use Amazon EMR or Apache Spark to manage the table operations. Lower upfront costs: You pay only for Amazon S3 storage, API requests, and the compute resources you use for maintenance. This can be more cost-effective for smaller or less-frequent workloads where continuous, automated optimization isn’t necessary. Note: The query performance depends entirely on your optimization strategy. Without continuous, scheduled jobs for compaction, performance can degrade over time as data gets fragmented. You must monitor these jobs to ensure efficient querying. S3 Tables S3 Tables provides S3 storage that’s optimized for analytic workloads and provides Apache Iceberg compatibility to store tabular data at scale. You can integrate S3 table buckets and tables with Data Catalog and register the catalog as a Lake Formation data location from the Lake Formation console or using service APIs, as shown in the following figure. This catalog will be registered and mounted as a federated lakehouse catalog. When to use it: Simplified operations: S3 Tables automatically handles table maintenance tasks such as compaction, snapshot management and orphan file cleanup in the background. This automation eliminates the need to build and manage custom maintenance jobs, significantly reducing your operational overhead. Automated optimization: S3 Tables provides built-in automatic optimizations that improve query performance. These optimizations include background processes such as file compaction to address the small files problem and data layout optimizations specific to tabular data. However, this automation trades flexibility for convenience. Because you can’t control the timing or method of compaction operations, workloads with specific performance requirements might experience varying query performance. Focus on data usage: S3 Tables reduces the engineering overhead and shifts the focus to data consumption, data governance and value creation. Simplified entry to open table formats: It’s suitable for teams who are new to the concept of Apache Iceberg but want to use transactional capabilities on data lake. No external catalog: Suitable for smaller teams who don’t want to manage an external catalog. 
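For the self-managed Iceberg option described above (the upkeep that S3 Tables automates for you), ongoing maintenance is your responsibility. The following sketch shows the standard Iceberg Spark maintenance procedures, reusing the `spark` session and `glue` catalog from the previous example; the table name and retention timestamp are placeholders, not recommendations.

```python
# Compact small files so scan performance does not degrade as data accumulates.
spark.sql("CALL glue.system.rewrite_data_files(table => 'silver.orders')")

# Expire snapshots older than an illustrative cutoff to bound table metadata.
spark.sql("""
    CALL glue.system.expire_snapshots(
        table => 'silver.orders',
        older_than => TIMESTAMP '2026-01-06 00:00:00'
    )
""")

# Remove data files no longer referenced by any snapshot.
spark.sql("CALL glue.system.remove_orphan_files(table => 'silver.orders')")
```

These procedures would typically run on a schedule, for example from Amazon EMR or AWS Glue jobs, rather than ad hoc.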
Redshift managed storage While the data lake serves as the central source of truth for all your data, it’s not the most suitable data store for every job. For the most demanding business intelligence and reporting workloads, the data lake’s open and flexible nature can introduce performance unpredictability. To help ensure the desired performance, consider transitioning a curated subset of your data from the data lake to a data warehouse for the following reasons: High concurrency BI and reporting: When hundreds of business users are concurrently running complex queries on live dashboards, a data warehouse is specifically optimized to handle these workloads with predictable, sub-second query latency. Predictable performance SLAs: For critical business processes that require data to be delivered at a guaranteed speed, such as financial reporting or end-of-day sales analysis, a data warehouse provides consistent performance. Complex SQL workloads: While data lakes are powerful, they can struggle with highly complex queries involving numerous joins and massive aggregations. A data warehouse is purpose-built to run these relational workloads efficiently. The lakehouse architecture on AWS supports Redshift Managed Storage (RMS), a storage option provided by Amazon Redshift, a fully managed, petabyte-scale data warehouse service in the cloud. RMS supports the automatic table optimizations offered in Amazon Redshift, such as built-in query optimizations for data warehousing workloads, automated materialized views, and AI-driven optimizations and scaling for frequently running workloads. Federated RMS catalog: Onboard existing Amazon Redshift data warehouses to the lakehouse Implementing a federated catalog with existing Amazon Redshift data warehouses creates a metadata-only integration that requires no data movement. This approach lets you extend your established Amazon Redshift investments into a modern open lakehouse framework while maintaining compatibility with existing workflows. Amazon Redshift uses a hierarchical data organization structure: Cluster level: Starts with a namespace Database level: Contains multiple databases Schema level: Organizes tables within databases When you register your existing Amazon Redshift provisioned or serverless namespaces as a federated catalog in Data Catalog, this hierarchy maps directly into the lakehouse metadata layer. The lakehouse implementation on AWS supports multiple catalogs using a dynamic hierarchy to organize and map the underlying storage metadata. After you register a namespace, the federated catalog automatically mounts across all Amazon Redshift data warehouses in your AWS Region and account. During this process, Amazon Redshift internally creates external databases that correspond to data shares. This mechanism remains completely abstracted from end users. By using federated catalogs, you gain immediate visibility and accessibility across your data ecosystem. Permissions on the federated catalogs can be managed by Lake Formation for both same-account and cross-account access. The real capability of federated catalogs emerges when accessing Amazon Redshift-managed storage from external AWS engines such as Amazon Athena, Amazon EMR, or open source Spark. Because Amazon Redshift uses proprietary block-based storage that only Amazon Redshift engines can read natively, AWS automatically provisions a service-managed Amazon Redshift Serverless instance in the background.
This service-managed instance acts as a translation layer between external engines and Amazon Redshift managed storage. AWS establishes automatic data shares between your registered federated catalog and the service-managed Amazon Redshift Serverless instance to enable secure, efficient data access. AWS also creates a service-managed Amazon S3 bucket in the background for data transfer. When an external engine such as Athena submits queries against an Amazon Redshift federated catalog, Lake Formation handles credential vending by providing temporary credentials to the requesting service. The query executes through the service-managed Amazon Redshift Serverless instance, which accesses data through automatically established data shares, processes results, offloads them to a service-managed Amazon S3 staging area, and then returns results to the original requesting engine. To track the compute cost of the federated catalog of an existing Amazon Redshift warehouse, use the following tag: aws:redshift-serverless:LakehouseManagedWorkgroup value: "True" To activate the AWS generated cost allocation tags for billing insight, follow the activation instructions. You can also view the computational cost of the resources in AWS Billing. When to use it: Existing Amazon Redshift investments: Federated catalogs are designed for organizations with existing Amazon Redshift deployments who want to use their data across multiple services without migration. Cross-service data sharing: Implement this approach so teams can share existing data in an Amazon Redshift data warehouse across different warehouses and centralize their permissions. Enterprise integration requirements: This approach is suitable for organizations that need to integrate with established data governance. It also maintains compatibility with current workflows while adding lakehouse capabilities. Infrastructure control and pricing: You retain full control over compute capacity for your existing warehouses for predictable workloads. You can optimize compute capacity, choose between on-demand and reserved capacity pricing, and fine-tune performance parameters. This provides cost predictability and performance control for consistent workloads. When implementing a lakehouse architecture with multiple catalog types, selecting the appropriate query engine is crucial for both performance and cost optimization. This post focuses on the storage foundation of the lakehouse; however, for critical workloads involving extensive Amazon Redshift data operations, consider executing queries within Amazon Redshift or using Spark when possible. Complex joins spanning multiple Amazon Redshift tables through external engines might result in higher compute costs if the engines don’t support full predicate push-down. Other use cases Build a multi-warehouse architecture Amazon Redshift supports data sharing, which you can use to share live data between source and target Amazon Redshift clusters. By using data sharing, you can share live data without creating copies or moving data, enabling use cases such as workload isolation (hub-and-spoke architecture) and cross-group collaboration (data mesh architecture). Without a lakehouse architecture, you must create an explicit data share between source and target Amazon Redshift clusters. While managing these data shares in small deployments is relatively straightforward, it becomes complex in data mesh architectures.
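To sketch the external-engine access path described above, the following boto3 call runs an Athena query against a table exposed through a federated Amazon Redshift catalog. The catalog, database, table, and result location names are placeholders; the exact catalog naming depends on how the federated catalog is mounted in your account.

```python
import boto3

athena = boto3.client("athena")

# All identifiers below are illustrative placeholders.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS order_count FROM orders GROUP BY status",
    QueryExecutionContext={
        "Catalog": "redshift_federated_catalog",
        "Database": "sales_schema",
    },
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```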
The lakehouse architecture addresses this challenge so customers can publish their existing Amazon Redshift warehouses as federated catalogs. These federated catalogs are automatically mounted and made available as external databases in other consumer Amazon Redshift warehouses within the same account and Region. By using this approach, you can maintain a single copy of data and use multiple data warehouses to query it, eliminating the need to create and manage multiple data shares and scale with workload isolation. The permission management becomes centralized through Lake Formation, streamlining governance across the entire multi-warehouse environment. Near real-time analytics on petabytes of transactional data with no pipeline management: Zero-ETL integrations seamlessly replicate transactional data from OLTP data sources to Amazon Redshift, general purpose S3 (with self-managed Iceberg) or S3 Tables. This approach eliminates the need to maintain complex ETL pipelines, reducing the number of moving parts in your data architecture and potential points of failure. Business users can analyze fresh operational data immediately rather than working with stale data from the last ETL run. See Aurora zero-ETL integrations for a list of OLTP data sources that can be replicated to an existing Amazon Redshift warehouse. See Zero-ETL integrations for information about other supported data sources that can be replicated to an existing Amazon Redshift warehouse, general purpose S3 with self-managed Iceberg, and S3 Tables. Conclusion A lakehouse architecture isn’t about choosing between a data lake and a data warehouse. Instead, it’s an approach to interoperability where both frameworks coexist and serve different purposes within a unified data architecture. By understanding fundamental storage patterns, implementing effective catalog strategies, and using native storage capabilities, you can build scalable, high-performance data architectures that support both your current analytics needs and future innovation. For more information, see The lakehouse architecture of Amazon SageMaker . About the authors Lakshmi Nair Lakshmi is a Senior Analytics Specialist Solutions Architect at AWS. She specializes in designing advanced analytics systems across industries. She focuses on crafting cloud-based data platforms, enabling real-time streaming, big data processing, and robust data governance. Saman Irfan Saman is a Senior Specialist Solutions Architect at Amazon Web Services, based in Berlin, Germany. Saman is passionate about helping organizations modernize their data architectures and unlock the full potential of their data to drive innovation and business transformation. Outside of work, she enjoys spending time with her family, watching TV series, and staying updated with the latest advancements in technology. Loading comments… Resources Amazon Athena Amazon EMR Amazon Kinesis Amazon MSK Amazon QuickSight Amazon Redshift AWS Glue Follow Twitter Facebook LinkedIn Twitch Email Updates Create an AWS account Learn What Is AWS? What Is Cloud Computing? What Is Agentic AI? 
| 2026-01-13T09:29:13 |
https://doc.rust-lang.org/reference/attributes.html#grammar-AttrInput | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicate that a type will have more fields/variants added in future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/reference/abi.html#the-export_name-attribute | Application binary interface - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [abi] Application binary interface (ABI) [abi .intro] This section documents features that affect the ABI of the compiled output of a crate. See extern functions for information on specifying the ABI for exporting functions. See external blocks for information on specifying the ABI for linking external libraries. [abi .used] The used attribute [abi .used .intro] The used attribute can only be applied to static items . This attribute forces the compiler to keep the variable in the output object file (.o, .rlib, etc. excluding final binaries) even if the variable is not used, or referenced, by any other item in the crate. However, the linker is still free to remove such an item. Below is an example that shows under what conditions the compiler keeps a static item in the output object file. #![allow(unused)] fn main() { // foo.rs // This is kept because of `#[used]`: #[used] static FOO: u32 = 0; // This is removable because it is unused: #[allow(dead_code)] static BAR: u32 = 0; // This is kept because it is publicly reachable: pub static BAZ: u32 = 0; // This is kept because it is referenced by a public, reachable function: static QUUX: u32 = 0; pub fn quux() -> &'static u32 { &QUUX } // This is removable because it is referenced by a private, unused (dead) function: static CORGE: u32 = 0; #[allow(dead_code)] fn corge() -> &'static u32 { &CORGE } } $ rustc -O --emit=obj --crate-type=rlib foo.rs $ nm -C foo.o 0000000000000000 R foo::BAZ 0000000000000000 r foo::FOO 0000000000000000 R foo::QUUX 0000000000000000 T foo::quux [abi .no_mangle] The no_mangle attribute [abi .no_mangle .intro] The no_mangle attribute may be used on any item to disable standard symbol name mangling. The symbol for the item will be the identifier of the item’s name. [abi .no_mangle .publicly-exported] Additionally, the item will be publicly exported from the produced library or object file, similar to the used attribute . [abi .no_mangle .unsafe] This attribute is unsafe as an unmangled symbol may collide with another symbol with the same name (or with a well-known symbol), leading to undefined behavior. #![allow(unused)] fn main() { #[unsafe(no_mangle)] extern "C" fn foo() {} } [abi .no_mangle .edition2024] 2024 Edition differences Before the 2024 edition it is allowed to use the no_mangle attribute without the unsafe qualification. [abi .link_section] The link_section attribute [abi .link_section .intro] The link_section attribute specifies the section of the object file that a function or static ’s content will be placed into. [abi .link_section .syntax] The link_section attribute uses the MetaNameValueStr syntax to specify the section name. #![allow(unused)] fn main() { #[unsafe(no_mangle)] #[unsafe(link_section = ".example_section")] pub static VAR1: u32 = 1; } [abi .link_section .unsafe] This attribute is unsafe as it allows users to place data and code into sections of memory not expecting them, such as mutable data into read-only areas. [abi .link_section .edition2024] 2024 Edition differences Before the 2024 edition it is allowed to use the link_section attribute without the unsafe qualification. 
[abi .export_name] The export_name attribute [abi .export_name .intro] The export_name attribute specifies the name of the symbol that will be exported on a function or static . [abi .export_name .syntax] The export_name attribute uses the MetaNameValueStr syntax to specify the symbol name. #![allow(unused)] fn main() { #[unsafe(export_name = "exported_symbol_name")] pub fn name_in_rust() { } } [abi .export_name .unsafe] This attribute is unsafe as a symbol with a custom name may collide with another symbol with the same name (or with a well-known symbol), leading to undefined behavior. [abi .export_name .edition2024] 2024 Edition differences Before the 2024 edition it is allowed to use the export_name attribute without the unsafe qualification. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/reference/attributes.html#r-attributes.safety | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicate that a type will have more fields/variants added in future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13 |
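To make the attribute forms described above concrete, here is a small illustrative Rust snippet (not taken from the reference itself); the type and function names are placeholders, and each attribute is a built-in one whose shape matches one of the meta item styles listed earlier:

#![allow(unused)] // MetaListPaths form, written as an inner attribute on the crate

// MetaNameValueStr form: the value must be a string literal.
#[doc = "Example type used only for this illustration."]
pub struct Example;

// MetaWord form: a bare identifier.
#[test]
fn it_works() {
    assert_eq!(1 + 1, 2);
}

// MetaListNameValueStr form, as used by built-in attributes such as `deprecated`.
#[deprecated(since = "1.0.0", note = "use `Example` instead")]
pub struct OldExample;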
https://aws.amazon.com/blogs/big-data/access-databricks-unity-catalog-data-using-catalog-federation-in-the-aws-glue-data-catalog/ | Access Databricks Unity Catalog data using catalog federation in the AWS Glue Data Catalog | AWS Big Data Blog Access Databricks Unity Catalog data using catalog federation in the AWS Glue Data Catalog by Srividya Parthasarathy and Venkat Viswanathan on 12 JAN 2026 in Advanced (300) , Amazon SageMaker , AWS Glue , AWS Lake Formation , Technical How-to AWS has launched the catalog federation capability, enabling direct access to Apache Iceberg tables managed in Databricks Unity Catalog through the AWS Glue Data Catalog . With this integration, you can discover and query Unity Catalog data in Iceberg format using an Iceberg REST API endpoint, while maintaining granular access controls through AWS Lake Formation . This approach significantly reduces operational overhead for managing catalog synchronization and associated costs by alleviating the need to replicate or duplicate datasets between platforms. In this post, we demonstrate how to set up catalog federation between the Glue Data Catalog and Databricks Unity Catalog, enabling data querying using AWS analytics services. Use cases and key benefits This federation capability is particularly valuable if you run multiple data platforms, because you can maintain your existing Iceberg catalog investments while using AWS analytics services. Catalog federation supports read operations and provides the following benefits: Interoperability – You can enable interoperability across different data platforms and tools through Iceberg REST APIs while preserving the value of your established technology investments. Cross-platform analytics – You can connect AWS analytics tools ( Amazon Athena , Amazon Redshift , Apache Spark) to query Iceberg and UniForm tables stored in Databricks Unity Catalog. It supports Databricks on AWS integration with the AWS Glue Iceberg REST Catalog for metadata retrieval, while using Lake Formation for permission management. Metadata management – The solution avoids manual catalog synchronization by making Databricks Unity Catalog databases and tables discoverable within the Data Catalog. You can implement unified governance through Lake Formation for fine-grained access control across federated catalog resources. Solution overview The solution uses catalog federation in the Data Catalog to integrate with Databricks Unity Catalog. The federated catalog created in AWS Glue mirrors the catalog objects in Databricks Unity Catalog and supports OAuth-based authentication. The solution is represented in the following diagram. The integration involves three high-level steps: Set up an integration principal in Databricks Unity Catalog and provide required read access on catalog resources to this principal. Enable OAuth-based authentication for the integration principal. Set up catalog federation to Databricks Unity Catalog in the Glue Data Catalog: Create a federated catalog in the Data Catalog using an AWS Glue connection. Create an AWS Glue connection that uses the credentials of the integration principal (in Step 1) to connect to Databricks Unity Catalog.
Configure an AWS Identity and Access Management (IAM) role with permission to Amazon Simple Storage Service (Amazon S3) locations where the Iceberg table data resides. In a cross-account scenario, make sure the bucket policy grants required access to this IAM role. Discover Iceberg tables in federated catalogs using Lake Formation or AWS Glue APIs. During query operations, Lake Formation manages fine-grained permissions on federated resources and credential vending for access to the underlying data. In the following sections, we walk through the steps to integrate the Glue Data Catalog with Databricks Unity Catalog on AWS. Prerequisites To follow along with the solution presented in this post, you must have the following prerequisites: Databricks Workspace (on AWS) with Databricks Unity Catalog configured. An IAM role that is a Lake Formation data lake administrator in your AWS account. A data lake administrator is an IAM principal that can register S3 locations, access the Data Catalog, grant Lake Formation permissions to other users, and view AWS CloudTrail logs. See Create a data lake administrator for more information. Configure Databricks Unity Catalog for external access Catalog federation to a Databricks Unity Catalog uses the OAuth2 credentials of a Databricks service principal configured in the workspace admin settings. This authentication mechanism allows the Data Catalog to access the metadata of various objects (such as catalogs, databases, and tables) within Databricks Unity Catalog, based on the privileges associated with the service principal. For proper functionality, grant the service principal the necessary permissions (read permission on catalog, schema, and tables) to read the metadata of these objects and allow access from external engines. Catalog federation then enables discovery and querying of Iceberg tables in your Databricks Unity Catalog. For reading Delta tables, enable UniForm on a Delta Lake table in Databricks to generate Iceberg metadata. For more information, refer to Read Delta tables with Iceberg clients . Follow the Databricks tutorial and documentation to create the service principal and associated privileges in your Databricks workspace. For this post, we use a service principal named integrationprincipal that is configured with the required permissions (SELECT, USE CATALOG, USE SCHEMA) on Databricks Unity Catalog objects and will be used for authentication to the catalog instance. Catalog federation supports OAuth2 authentication, so enable OAuth for the service principal and note down the client_id and client_secret for later use. Set up Data Catalog federation with Databricks Unity Catalog Now that you have service principal access for Databricks Unity Catalog, you can set up catalog federation in the Data Catalog. To do so, you create an AWS Secrets Manager secret and create an IAM role for catalog federation. Create secret Complete the following steps to create a secret: Sign in to the AWS Management Console using an IAM role with access to Secrets Manager. On the Secrets Manager console, choose Store a new secret and Other type of secret . Set the key-value pair: Key: USER_MANAGED_CLIENT_APPLICATION_CLIENT_SECRET Value: The client secret noted earlier Choose Next . Enter a name for your secret (for this post, we use dbx ). Choose Store .
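If you prefer to script this step, a roughly equivalent AWS CLI call is sketched below. It assumes the secret name dbx and the key USER_MANAGED_CLIENT_APPLICATION_CLIENT_SECRET from this walkthrough; replace the placeholder with the client_secret you noted earlier.

# Store the Databricks OAuth client secret for use by the AWS Glue connection
aws secretsmanager create-secret \
    --name dbx \
    --secret-string '{"USER_MANAGED_CLIENT_APPLICATION_CLIENT_SECRET":"<client_secret>"}'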
Create IAM role for catalog federation As the catalog owner of a federated catalog in the Data Catalog, you can use Lake Formation to implement comprehensive access controls, including table filters, column filters, and row filters, as well as tag-based access for your data teams. Lake Formation requires an IAM role with permissions to access the underlying S3 locations of your external catalog. In this step, you create an IAM role that enables the AWS Glue connection to access Secrets Manager and optional virtual private cloud (VPC) configurations, and enables Lake Formation to manage credential vending for the S3 bucket and prefix: Secrets Manager access – The AWS Glue connection requires permissions to retrieve secret values from Secrets Manager for OAuth tokens stored for your Databricks Unity service connection. VPC access (optional) – When using VPC endpoints to restrict connectivity to your Databricks Unity account, the AWS Glue connection needs permissions to describe and utilize VPC network interfaces. This configuration provides secure, controlled access to both your stored credentials and network resources while maintaining proper isolation through VPC endpoints. S3 bucket and AWS KMS key permission – The AWS Glue connection requires Amazon S3 permissions to read certificates if used in the connection setup. Additionally, Lake Formation requires read permissions on the bucket and prefix where the remote catalog table data resides. If the data is encrypted using an AWS Key Management Service (AWS KMS) key, additional AWS KMS permissions are required. Complete the following steps: Create an IAM role called LFDataAccessRole with the following policies (the s3:GetObject statement is only needed when a custom certificate is used to sign requests, and the KMS statement is only needed when the data is encrypted with a customer managed key): { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": [ "<secrets manager ARN>" ] }, { "Effect": "Allow", "Action": [ "ec2:CreateNetworkInterface", "ec2:DeleteNetworkInterface", "ec2:DescribeNetworkInterfaces" ], "Resource": "*", "Condition": { "ArnEquals": { "ec2:Vpc": "arn:aws:ec2:region:account-id:vpc/<vpc-id>", "ec2:Subnet": [ "arn:aws:ec2:region:account-id:subnet/<subnet-id>" ] } } }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::<bucketname>/<certpath>" ] }, { "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:Encrypt" ], "Resource": [ "<kmsKey>" ] } ] } Configure the role with the following trust policy: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": ["glue.amazonaws.com","lakeformation.amazonaws.com"] }, "Action": "sts:AssumeRole" } ] } Create federated catalog in Data Catalog AWS Glue supports the DATABRICKSICEBERGRESTCATALOG connection type for connecting the Data Catalog with managed Databricks Unity Catalog. This AWS Glue connector supports OAuth2 authentication for discovering metadata in Databricks Unity Catalog. Complete the following steps to create the federated catalog: Sign in to the console as a data lake admin. On the Lake Formation console, choose Catalogs in the navigation pane. Choose Create catalog . For Name , enter a name for your catalog. For Catalog name in Databricks , enter the name of a catalog existing in Databricks Unity Catalog. For Connection name , enter a name for the AWS Glue connection. For Workspace URL , enter the Unity Iceberg REST API URL (in the format https://<workspace-url>.cloud.databricks.com ).
For Authentication , provide the following information: For Authentication type , choose OAuth2 . Alternatively, you can choose Custom authentication . For Custom authentication , an access token is created, refreshed, and managed by the customer’s application or system and stored using Secrets Manager. For Token URL , enter the token authentication server URL. For OAuth Client ID , enter the client_id for integrationprincipal . For OAuth Secret , enter the secret ARN that you created in the previous step. Alternatively, you can provide the client_secret directly. For Token URL parameter map scope , provide the API scope supported. If you have AWS PrivateLink set up or a proxy set up, you can provide network details under Settings for network configurations . For Register Glue connection with Lake Formation , choose the IAM role ( LFDataAccessRole ) created earlier to manage data access using Lake Formation. When the setup is done using AWS Command Line Interface (AWS CLI) commands, you have options to create two separate IAM roles: IAM role with policies to access network and secrets, which AWS Glue assumes to manage authentication IAM role with access to the S3 bucket, which Lake Formation assumes to manage credential vending for data access On the console, this setup is simplified with a single role having combined policies. For more details, refer to Federate to Databricks Unity Catalog . To test the connection, choose Run test . You can proceed to create the catalog. After you create the catalog, you can see the databases and tables in Databricks Unity Catalog listed under the federated catalog. You can implement fine-grained access control on the tables by applying row and column filters using Lake Formation. The following video shows the catalog federation setup with Databricks Unity Catalog. Discover and query the data using Athena In this post, we show how to use the Athena query editor to discover and query the Databricks Unity Catalog tables. On the Athena console, run the following query to access the federated table: SELECT * FROM "customerschema"."person" limit 10; The following video demonstrates querying the federated table from Athena. If you use the Amazon Redshift query engine, you must create a resource link on the federated database and grant permission on the resource link to the user or role. This database resource link is automounted under awsdatacatalog based on the permission granted for the user or role and available for querying. For instructions, refer to Creating resource links. Clean up To clean up your resources, complete the following steps: Delete the catalog and namespace in Databricks Unity Catalog for this post. Drop the resources in the Data Catalog and Lake Formation created for this post. Delete the IAM roles and S3 buckets used for this post. Delete any VPC and KMS keys if used for this post. Conclusion In this post, we explored the key elements of catalog federation and its architectural design, illustrating the interaction between the AWS Glue Data Catalog and Databricks Unity Catalog through centralized authorization and credential distribution for protected data access. By removing the requirement for complicated synchronization workflows, catalog federation makes it possible to query Iceberg data on Amazon S3 directly at its source using AWS analytics services with data governance across multi-catalog platforms. Try out the solution for your own use case, and share your feedback and questions in the comments. 
About the Authors Srividya Parthasarathy Srividya is a Senior Big Data Architect on the AWS Lake Formation team. She works with the product team and customers to build robust features and solutions for their analytical data platform. She enjoys building data mesh solutions and sharing them with the community. Venkatavaradhan (Venkat) Viswanathan Venkat is a Global Partner Solutions Architect at Amazon Web Services. Venkat is a Technology Strategy Leader in Data, AI, ML, Generative AI, and Advanced Analytics. Venkat is a Global SME for Databricks and helps AWS customers design, build, secure, and optimize Databricks workloads on AWS. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/cargo/reference/config.html#configuration-format | Configuration - The Cargo Book Configuration This document explains how Cargo’s configuration system works, as well as available keys or configuration. For configuration of a package through its manifest, see the manifest format . Hierarchical structure Cargo allows local configuration for a particular package as well as global configuration. It looks for configuration files in the current directory and all parent directories. If, for example, Cargo were invoked in /projects/foo/bar/baz , then the following configuration files would be probed for and unified in this order: /projects/foo/bar/baz/.cargo/config.toml /projects/foo/bar/.cargo/config.toml /projects/foo/.cargo/config.toml /projects/.cargo/config.toml /.cargo/config.toml $CARGO_HOME/config.toml which defaults to: Windows: %USERPROFILE%\.cargo\config.toml Unix: $HOME/.cargo/config.toml With this structure, you can specify configuration per-package, and even possibly check it into version control. You can also specify personal defaults with a configuration file in your home directory. If a key is specified in multiple config files, the values will get merged together. Numbers, strings, and booleans will use the value in the deeper config directory taking precedence over ancestor directories, where the home directory is the lowest priority. Arrays will be joined together with higher precedence items being placed later in the merged array. At present, when being invoked from a workspace, Cargo does not read config files from crates within the workspace. i.e. if a workspace has two crates in it, named /projects/foo/bar/baz/mylib and /projects/foo/bar/baz/mybin , and there are Cargo configs at /projects/foo/bar/baz/mylib/.cargo/config.toml and /projects/foo/bar/baz/mybin/.cargo/config.toml , Cargo does not read those configuration files if it is invoked from the workspace root ( /projects/foo/bar/baz/ ). Note: Cargo also reads config files without the .toml extension, such as .cargo/config . Support for the .toml extension was added in version 1.39 and is the preferred form. If both files exist, Cargo will use the file without the extension. Configuration format Configuration files are written in the TOML format (like the manifest), with simple key-value pairs inside of sections (tables). The following is a quick overview of all settings, with detailed descriptions found below.
paths = ["/path/to/override"] # path dependency overrides [alias] # command aliases b = "build" c = "check" t = "test" r = "run" rr = "run --release" recursive_example = "rr --example recursions" space_example = ["run", "--release", "--", "\"command list\""] [build] jobs = 1 # number of parallel jobs, defaults to # of CPUs rustc = "rustc" # the rust compiler tool rustc-wrapper = "…" # run this wrapper instead of `rustc` rustc-workspace-wrapper = "…" # run this wrapper instead of `rustc` for workspace members rustdoc = "rustdoc" # the doc generator tool target = "triple" # build for the target triple (ignored by `cargo install`) target-dir = "target" # path of where to place generated artifacts build-dir = "target" # path of where to place intermediate build artifacts rustflags = ["…", "…"] # custom flags to pass to all compiler invocations rustdocflags = ["…", "…"] # custom flags to pass to rustdoc incremental = true # whether or not to enable incremental compilation dep-info-basedir = "…" # path for the base directory for targets in depfiles [credential-alias] # Provides a way to define aliases for credential providers. my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] [doc] browser = "chromium" # browser to use with `cargo doc --open`, # overrides the `BROWSER` environment variable [env] # Set ENV_VAR_NAME=value for any process run by Cargo ENV_VAR_NAME = "value" # Set even if already present in environment ENV_VAR_NAME_2 = { value = "value", force = true } # `value` is relative to the parent of `.cargo/config.toml`, env var will be the full absolute path ENV_VAR_NAME_3 = { value = "relative/path", relative = true } [future-incompat-report] frequency = 'always' # when to display a notification about a future incompat report [cache] auto-clean-frequency = "1 day" # How often to perform automatic cache cleaning [cargo-new] vcs = "none" # VCS to use ('git', 'hg', 'pijul', 'fossil', 'none') [http] debug = false # HTTP debugging proxy = "host:port" # HTTP proxy in libcurl format ssl-version = "tlsv1.3" # TLS version to use ssl-version.max = "tlsv1.3" # maximum TLS version ssl-version.min = "tlsv1.1" # minimum TLS version timeout = 30 # timeout for each HTTP request, in seconds low-speed-limit = 10 # network timeout threshold (bytes/sec) cainfo = "cert.pem" # path to Certificate Authority (CA) bundle proxy-cainfo = "cert.pem" # path to proxy Certificate Authority (CA) bundle check-revoke = true # check for SSL certificate revocation multiplexing = true # HTTP/2 multiplexing user-agent = "…" # the user-agent header [install] root = "/some/path" # `cargo install` destination directory [net] retry = 3 # network retries git-fetch-with-cli = true # use the `git` executable for git operations offline = true # do not access the network [net.ssh] known-hosts = ["..."] # known SSH host keys [patch.<registry>] # Same keys as for [patch] in Cargo.toml [profile.<name>] # Modify profile settings via config. inherits = "dev" # Inherits settings from [profile.dev]. opt-level = 0 # Optimization level. debug = true # Include debug info. split-debuginfo = '...' # Debug info splitting behavior. strip = "none" # Removes symbols or debuginfo. debug-assertions = true # Enables debug assertions. overflow-checks = true # Enables runtime integer overflow checks. lto = false # Sets link-time optimization. panic = 'unwind' # The panic strategy. incremental = true # Incremental compilation. codegen-units = 16 # Number of code generation units. rpath = false # Sets the rpath linking option. 
[profile.<name>.build-override] # Overrides build-script settings. # Same keys for a normal profile. [profile.<name>.package.<name>] # Override profile for a package. # Same keys for a normal profile (minus `panic`, `lto`, and `rpath`). [resolver] incompatible-rust-versions = "allow" # Specifies how resolver reacts to these [registries.<name>] # registries other than crates.io index = "…" # URL of the registry index token = "…" # authentication token for the registry credential-provider = "cargo:token" # The credential provider for this registry. [registries.crates-io] protocol = "sparse" # The protocol to use to access crates.io. [registry] default = "…" # name of the default registry token = "…" # authentication token for crates.io credential-provider = "cargo:token" # The credential provider for crates.io. global-credential-providers = ["cargo:token"] # The credential providers to use by default. [source.<name>] # source definition and replacement replace-with = "…" # replace this source with the given named source directory = "…" # path to a directory source registry = "…" # URL to a registry source local-registry = "…" # path to a local registry source git = "…" # URL of a git repository source branch = "…" # branch name for the git repository tag = "…" # tag name for the git repository rev = "…" # revision for the git repository [target.<triple>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` rustdocflags = ["…", "…"] # custom flags for `rustdoc` [target.<cfg>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` [target.<triple>.<links>] # `links` build script override rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] quiet = false # whether cargo output is quiet verbose = false # whether cargo provides verbose output color = 'auto' # whether cargo colorizes output hyperlinks = true # whether cargo inserts links into output unicode = true # whether cargo can render output using non-ASCII unicode characters progress.when = 'auto' # whether cargo shows progress bar progress.width = 80 # width of progress bar progress.term-integration = true # whether cargo reports progress to terminal emulator Environment variables Cargo can also be configured through environment variables in addition to the TOML configuration files. For each configuration key of the form foo.bar the environment variable CARGO_FOO_BAR can also be used to define the value. Keys are converted to uppercase, dots and dashes are converted to underscores. For example the target.x86_64-unknown-linux-gnu.runner key can also be defined by the CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER environment variable. Environment variables will take precedence over TOML configuration files. Currently only integer, boolean, string and some array values are supported to be defined by environment variables. Descriptions below indicate which keys support environment variables and otherwise they are not supported due to technical issues . In addition to the system above, Cargo recognizes a few other specific environment variables . Command-line overrides Cargo also accepts arbitrary configuration overrides through the --config command-line option. 
The argument should be in TOML syntax of KEY=VALUE or provided as a path to an extra configuration file: # With `KEY=VALUE` in TOML syntax cargo --config net.git-fetch-with-cli=true fetch # With a path to a configuration file cargo --config ./path/to/my/extra-config.toml fetch The --config option may be specified multiple times, in which case the values are merged in left-to-right order, using the same merging logic that is used when multiple configuration files apply. Configuration values specified this way take precedence over environment variables, which take precedence over configuration files. When the --config option is provided as an extra configuration file, the configuration file loaded this way follows the same precedence rules as other options specified directly with --config . Some examples of what it looks like using Bourne shell syntax: # Most shells will require escaping. cargo --config http.proxy=\"http://example.com\" … # Spaces may be used. cargo --config "net.git-fetch-with-cli = true" … # TOML array example. Single quotes make it easier to read and write. cargo --config 'build.rustdocflags = ["--html-in-header", "header.html"]' … # Example of a complex TOML key. cargo --config "target.'cfg(all(target_arch = \"arm\", target_os = \"none\"))'.runner = 'my-runner'" … # Example of overriding a profile setting. cargo --config profile.dev.package.image.opt-level=3 … Config-relative paths Paths in config files may be absolute, relative, or a bare name without any path separators. Paths for executables without a path separator will use the PATH environment variable to search for the executable. Paths for non-executables will be relative to where the config value is defined. In particular, the rules are: For environment variables, paths are relative to the current working directory. For config values loaded directly from the --config KEY=VALUE option, paths are relative to the current working directory. For config files, paths are relative to the parent directory of the directory where the config files were defined, regardless of whether those files come from the hierarchical probing or the --config <path> option. Note: To maintain consistency with existing .cargo/config.toml probing behavior, it is by design that a path in a config file passed via --config <path> is also relative to two levels up from the config file itself. To avoid unexpected results, the rule of thumb is to put your extra config files at the same level as the discovered .cargo/config.toml in your project. For instance, given a project /my/project , it is recommended to put config files under /my/project/.cargo or a new directory at the same level, such as /my/project/.config . # Relative path examples. [target.x86_64-unknown-linux-gnu] runner = "foo" # Searches `PATH` for `foo`. [source.vendored-sources] # Directory is relative to the parent where `.cargo/config.toml` is located. # For example, `/my/project/.cargo/config.toml` would result in `/my/project/vendor`. directory = "vendor" Executable paths with arguments Some Cargo commands invoke external programs, which can be configured as a path and some number of arguments. The value may be an array of strings like ['/path/to/program', 'somearg'] or a space-separated string like '/path/to/program somearg' . If the path to the executable contains a space, the list form must be used. If Cargo is passing other arguments to the program such as a path to open or run, they will be passed after the last specified argument in the value of an option of this format.
If the specified program does not have path separators, Cargo will search PATH for its executable. Credentials Configuration values with sensitive information are stored in the $CARGO_HOME/credentials.toml file. This file is automatically created and updated by cargo login and cargo logout when using the cargo:token credential provider. Tokens are used by some Cargo commands such as cargo publish for authenticating with remote registries. Care should be taken to protect the tokens and to keep them secret. It follows the same format as Cargo config files. [registry] token = "…" # Access token for crates.io [registries.<name>] token = "…" # Access token for the named registry As with most other config values, tokens may be specified with environment variables. The token for crates.io may be specified with the CARGO_REGISTRY_TOKEN environment variable. Tokens for other registries may be specified with environment variables of the form CARGO_REGISTRIES_<name>_TOKEN where <name> is the name of the registry in all capital letters. Note: Cargo also reads and writes credential files without the .toml extension, such as .cargo/credentials . Support for the .toml extension was added in version 1.39. In version 1.68, Cargo writes to the file with the extension by default. However, for backward compatibility reason, when both files exist, Cargo will read and write the file without the extension. Configuration keys This section documents all configuration keys. The description for keys with variable parts are annotated with angled brackets like target.<triple> where the <triple> part can be any target triple like target.x86_64-pc-windows-msvc . paths Type: array of strings (paths) Default: none Environment: not supported An array of paths to local packages which are to be used as overrides for dependencies. For more information see the Overriding Dependencies guide . [alias] Type: string or array of strings Default: see below Environment: CARGO_ALIAS_<name> The [alias] table defines CLI command aliases. For example, running cargo b is an alias for running cargo build . Each key in the table is the subcommand, and the value is the actual command to run. The value may be an array of strings, where the first element is the command and the following are arguments. It may also be a string, which will be split on spaces into subcommand and arguments. The following aliases are built-in to Cargo: [alias] b = "build" c = "check" d = "doc" t = "test" r = "run" rm = "remove" Aliases are not allowed to redefine existing built-in commands. Aliases are recursive: [alias] rr = "run --release" recursive_example = "rr --example recursions" [build] The [build] table controls build-time operations and compiler settings. build.jobs Type: integer or string Default: number of logical CPUs Environment: CARGO_BUILD_JOBS Sets the maximum number of compiler processes to run in parallel. If negative, it sets the maximum number of compiler processes to the number of logical CPUs plus provided value. Should not be 0. If a string default is provided, it sets the value back to defaults. Can be overridden with the --jobs CLI option. build.rustc Type: string (program path) Default: "rustc" Environment: CARGO_BUILD_RUSTC or RUSTC Sets the executable to use for rustc . build.rustc-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WRAPPER or RUSTC_WRAPPER Sets a wrapper to execute instead of rustc . 
The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). build.rustc-workspace-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WORKSPACE_WRAPPER or RUSTC_WORKSPACE_WRAPPER Sets a wrapper to execute instead of rustc , for workspace members only. When building a single-package project without workspaces, that package is considered to be the workspace. The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). It affects the filename hash so that artifacts produced by the wrapper are cached separately. If both rustc-wrapper and rustc-workspace-wrapper are set, then they will be nested: the final invocation is $RUSTC_WRAPPER $RUSTC_WORKSPACE_WRAPPER $RUSTC . build.rustdoc Type: string (program path) Default: "rustdoc" Environment: CARGO_BUILD_RUSTDOC or RUSTDOC Sets the executable to use for rustdoc . build.target Type: string or array of strings Default: host platform Environment: CARGO_BUILD_TARGET The default target platform triples to compile to. Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. Can be overridden with the --target CLI option. [build] target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"] build.target-dir Type: string (path) Default: "target" Environment: CARGO_BUILD_TARGET_DIR or CARGO_TARGET_DIR The path to where all compiler output is placed. The default if not specified is a directory named target located at the root of the workspace. Can be overridden with the --target-dir CLI option. For more information see the build cache documentation . build.build-dir Type: string (path) Default: Defaults to the value of build.target-dir Environment: CARGO_BUILD_BUILD_DIR The directory where intermediate build artifacts will be stored. Intermediate artifacts are produced by Rustc/Cargo during the build process. This option supports path templating. Available template variables: {workspace-root} resolves to root of the current workspace. {cargo-cache-home} resolves to CARGO_HOME {workspace-path-hash} resolves to a hash of the manifest path For more information see the build cache documentation . build.rustflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTFLAGS or CARGO_ENCODED_RUSTFLAGS or RUSTFLAGS Extra command-line flags to pass to rustc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTFLAGS environment variable. RUSTFLAGS environment variable. All matching target.<triple>.rustflags and target.<cfg>.rustflags config entries joined together. build.rustflags config value. Additional flags may also be passed with the cargo rustc command. If the --target flag (or build.target ) is used, then the flags will only be passed to the compiler for the target. Things being built for the host, such as build scripts or proc macros, will not receive the args. 
Without --target , the flags will be passed to all compiler invocations (including build scripts and proc macros) because dependencies are shared. If you have args that you do not want to pass to build scripts or proc macros and are building for the host, pass --target with the host triple . It is not recommended to pass in flags that Cargo itself usually manages. For example, the flags driven by profiles are best handled by setting the appropriate profile setting. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.rustdocflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTDOCFLAGS or CARGO_ENCODED_RUSTDOCFLAGS or RUSTDOCFLAGS Extra command-line flags to pass to rustdoc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTDOCFLAGS environment variable. RUSTDOCFLAGS environment variable. All matching target.<triple>.rustdocflags config entries joined together. build.rustdocflags config value. Additional flags may also be passed with the cargo rustdoc command. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.incremental Type: bool Default: from profile Environment: CARGO_BUILD_INCREMENTAL or CARGO_INCREMENTAL Whether or not to perform incremental compilation . The default if not set is to use the value from the profile . Otherwise this overrides the setting of all profiles. The CARGO_INCREMENTAL environment variable can be set to 1 to force enable incremental compilation for all profiles, or 0 to disable it. This env var overrides the config setting. build.dep-info-basedir Type: string (path) Default: none Environment: CARGO_BUILD_DEP_INFO_BASEDIR Strips the given path prefix from dep info file paths. This config setting is intended to convert absolute paths to relative paths for tools that require relative paths. The setting itself is a config-relative path. So, for example, a value of "." would strip all paths starting with the parent directory of the .cargo directory. build.pipelining This option is deprecated and unused. Cargo always has pipelining enabled. [credential-alias] Type: string or array of strings Default: empty Environment: CARGO_CREDENTIAL_ALIAS_<name> The [credential-alias] table defines credential provider aliases. These aliases can be referenced as an element of the registry.global-credential-providers array, or as a credential provider for a specific registry under registries.<NAME>.credential-provider . If specified as a string, the value will be split on spaces into path and arguments. For example, to define an alias called my-alias : [credential-alias] my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] See Registry Authentication for more information. [doc] The [doc] table defines options for the cargo doc command. 
doc.browser Type: string or array of strings ( program path with args ) Default: BROWSER environment variable, or, if that is missing, opening the link in a system specific way This option sets the browser to be used by cargo doc , overriding the BROWSER environment variable when opening documentation with the --open option. [cargo-new] The [cargo-new] table defines defaults for the cargo new command. cargo-new.name This option is deprecated and unused. cargo-new.email This option is deprecated and unused. cargo-new.vcs Type: string Default: "git" or "none" Environment: CARGO_CARGO_NEW_VCS Specifies the source control system to use for initializing a new repository. Valid values are git , hg (for Mercurial), pijul , fossil or none to disable this behavior. Defaults to git , or none if already inside a VCS repository. Can be overridden with the --vcs CLI option. [env] The [env] section allows you to set additional environment variables for build scripts, rustc invocations, cargo run and cargo build . [env] OPENSSL_DIR = "/opt/openssl" By default, the variables specified will not override values that already exist in the environment. This behavior can be changed by setting the force flag. Setting the relative flag evaluates the value as a config-relative path that is relative to the parent directory of the .cargo directory that contains the config.toml file. The value of the environment variable will be the full absolute path. [env] TMPDIR = { value = "/home/tmp", force = true } OPENSSL_DIR = { value = "vendor/openssl", relative = true } [future-incompat-report] The [future-incompat-report] table controls setting for future incompat reporting future-incompat-report.frequency Type: string Default: "always" Environment: CARGO_FUTURE_INCOMPAT_REPORT_FREQUENCY Controls how often we display a notification to the terminal when a future incompat report is available. Possible values: always (default): Always display a notification when a command (e.g. cargo build ) produces a future incompat report never : Never display a notification [cache] The [cache] table defines settings for cargo’s caches. Global caches When running cargo commands, Cargo will automatically track which files you are using within the global cache. Periodically, Cargo will delete files that have not been used for some period of time. It will delete files that have to be downloaded from the network if they have not been used in 3 months. Files that can be generated without network access will be deleted if they have not been used in 1 month. The automatic deletion of files only occurs when running commands that are already doing a significant amount of work, such as all of the build commands ( cargo build , cargo test , cargo check , etc.), and cargo fetch . Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time. Note : This tracking is currently only implemented for the global cache in Cargo’s home directory. This includes registry indexes and source files downloaded from registries and git dependencies. Support for tracking build artifacts is not yet implemented, and tracked in cargo#13136 . Additionally, there is an unstable feature to support manually triggering cache cleaning, and to further customize the configuration options. See the Unstable chapter for more information. 
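As a concrete illustration (a sketch, not an excerpt from the book), a user-level config file could relax the cleaning schedule with the auto-clean-frequency key described next; the value follows the integer-plus-unit form that key accepts:

# $CARGO_HOME/config.toml
[cache]
# Check for and delete stale global-cache files at most once a week.
auto-clean-frequency = "1 week"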
cache.auto-clean-frequency Type: string Default: "1 day" Environment: CARGO_CACHE_AUTO_CLEAN_FREQUENCY This option defines how often Cargo will automatically delete unused files in the global cache. This does not define how old the files must be, those thresholds are described above . It supports the following settings: "never" — Never deletes old files. "always" — Checks to delete old files every time Cargo runs. An integer followed by “seconds”, “minutes”, “hours”, “days”, “weeks”, or “months” — Checks to delete old files at most the given time frame. [http] The [http] table defines settings for HTTP behavior. This includes fetching crate dependencies and accessing remote git repositories. http.debug Type: boolean Default: false Environment: CARGO_HTTP_DEBUG If true , enables debugging of HTTP requests. The debug information can be seen by setting the CARGO_LOG=network=debug environment variable (or use network=trace for even more information). Be wary when posting logs from this output in a public location. The output may include headers with authentication tokens which you don’t want to leak! Be sure to review logs before posting them. http.proxy Type: string Default: none Environment: CARGO_HTTP_PROXY or HTTPS_PROXY or https_proxy or http_proxy Sets an HTTP and HTTPS proxy to use. The format is in libcurl format as in [protocol://]host[:port] . If not set, Cargo will also check the http.proxy setting in your global git configuration. If none of those are set, the HTTPS_PROXY or https_proxy environment variables set the proxy for HTTPS requests, and http_proxy sets it for HTTP requests. http.timeout Type: integer Default: 30 Environment: CARGO_HTTP_TIMEOUT or HTTP_TIMEOUT Sets the timeout for each HTTP request, in seconds. http.cainfo Type: string (path) Default: none Environment: CARGO_HTTP_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify TLS certificates. If not specified, Cargo attempts to use the system certificates. http.proxy-cainfo Type: string (path) Default: falls back to http.cainfo if not set Environment: CARGO_HTTP_PROXY_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify proxy TLS certificates. http.check-revoke Type: boolean Default: true (Windows) false (all others) Environment: CARGO_HTTP_CHECK_REVOKE This determines whether or not TLS certificate revocation checks should be performed. This only works on Windows. http.ssl-version Type: string or min/max table Default: none Environment: CARGO_HTTP_SSL_VERSION This sets the minimum TLS version to use. It takes a string, with one of the possible values of "default" , "tlsv1" , "tlsv1.0" , "tlsv1.1" , "tlsv1.2" , or "tlsv1.3" . This may alternatively take a table with two keys, min and max , which each take a string value of the same kind that specifies the minimum and maximum range of TLS versions to use. The default is a minimum version of "tlsv1.0" and a max of the newest version supported on your platform, typically "tlsv1.3" . http.low-speed-limit Type: integer Default: 10 Environment: CARGO_HTTP_LOW_SPEED_LIMIT This setting controls timeout behavior for slow connections. If the average transfer speed in bytes per second is below the given value for http.timeout seconds (default 30 seconds), then the connection is considered too slow and Cargo will abort and retry. http.multiplexing Type: boolean Default: true Environment: CARGO_HTTP_MULTIPLEXING When true , Cargo will attempt to use the HTTP2 protocol with multiplexing. 
This allows multiple requests to use the same connection, usually improving performance when fetching multiple files. If false , Cargo will use HTTP 1.1 without pipelining. http.user-agent Type: string Default: Cargo’s version Environment: CARGO_HTTP_USER_AGENT Specifies a custom user-agent header to use. The default if not specified is a string that includes Cargo’s version. [install] The [install] table defines defaults for the cargo install command. install.root Type: string (path) Default: Cargo’s home directory Environment: CARGO_INSTALL_ROOT Sets the path to the root directory for installing executables for cargo install . Executables go into a bin directory underneath the root. To track information of installed executables, some extra files, such as .crates.toml and .crates2.json , are also created under this root. The default if not specified is Cargo’s home directory (default .cargo in your home directory). Can be overridden with the --root command-line option. [net] The [net] table controls networking configuration. net.retry Type: integer Default: 3 Environment: CARGO_NET_RETRY Number of times to retry possibly spurious network errors. net.git-fetch-with-cli Type: boolean Default: false Environment: CARGO_NET_GIT_FETCH_WITH_CLI If this is true , then Cargo will use the git executable to fetch registry indexes and git dependencies. If false , then it uses a built-in git library. Setting this to true can be helpful if you have special authentication requirements that Cargo does not support. See Git Authentication for more information about setting up git authentication. net.offline Type: boolean Default: false Environment: CARGO_NET_OFFLINE If this is true , then Cargo will avoid accessing the network, and attempt to proceed with locally cached data. If false , Cargo will access the network as needed, and generate an error if it encounters a network error. Can be overridden with the --offline command-line option. net.ssh The [net.ssh] table contains settings for SSH connections. net.ssh.known-hosts Type: array of strings Default: see description Environment: not supported The known-hosts array contains a list of SSH host keys that should be accepted as valid when connecting to an SSH server (such as for SSH git dependencies). Each entry should be a string in a format similar to OpenSSH known_hosts files. Each string should start with one or more hostnames separated by commas, a space, the key type name, a space, and the base64-encoded key. For example: [net.ssh] known-hosts = [ "example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFO4Q5T0UV0SQevair9PFwoxY9dl4pQl3u5phoqJH3cF" ] Cargo will attempt to load known hosts keys from common locations supported in OpenSSH, and will join those with any listed in a Cargo configuration file. If any matching entry has the correct key, the connection will be allowed. Cargo comes with the host keys for github.com built-in. If those ever change, you can add the new keys to the config or known_hosts file. See Git Authentication for more details. [patch] Just as you can override dependencies using [patch] in Cargo.toml , you can override them in the cargo configuration file to apply those patches to any affected build. The format is identical to the one used in Cargo.toml . Since .cargo/config.toml files are not usually checked into source control, you should prefer patching using Cargo.toml where possible to ensure that other developers can compile your crate in their own environments. 
Patching through cargo configuration files is generally only appropriate when the patch section is automatically generated by an external build tool. If a given dependency is patched both in a cargo configuration file and a Cargo.toml file, the patch in the configuration file is used. If multiple configuration files patch the same dependency, standard cargo configuration merging is used, which prefers the value defined closest to the current directory, with $HOME/.cargo/config.toml taking the lowest precedence. Relative path dependencies in such a [patch] section are resolved relative to the configuration file they appear in. [profile] The [profile] table can be used to globally change profile settings, and override settings specified in Cargo.toml . It has the same syntax and options as profiles specified in Cargo.toml . See the Profiles chapter for details about the options. [profile.<name>.build-override] Environment: CARGO_PROFILE_<name>_BUILD_OVERRIDE_<key> The build-override table overrides settings for build scripts, proc macros, and their dependencies. It has the same keys as a normal profile. See the overrides section for more details. [profile.<name>.package.<name>] Environment: not supported The package table overrides settings for specific packages. It has the same keys as a normal profile, minus the panic , lto , and rpath settings. See the overrides section for more details. profile.<name>.codegen-units Type: integer Default: See profile docs. Environment: CARGO_PROFILE_<name>_CODEGEN_UNITS See codegen-units . profile.<name>.debug Type: integer or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG See debug . profile.<name>.split-debuginfo Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_SPLIT_DEBUGINFO See split-debuginfo . profile.<name>.debug-assertions Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG_ASSERTIONS See debug-assertions . profile.<name>.incremental Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_INCREMENTAL See incremental . profile.<name>.lto Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_LTO See lto . profile.<name>.overflow-checks Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_OVERFLOW_CHECKS See overflow-checks . profile.<name>.opt-level Type: integer or string Default: See profile docs. Environment: CARGO_PROFILE_<name>_OPT_LEVEL See opt-level . profile.<name>.panic Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_PANIC See panic . profile.<name>.rpath Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_RPATH See rpath . profile.<name>.strip Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_STRIP See strip . [resolver] The [resolver] table overrides dependency resolution behavior for local development (e.g. excludes cargo install ). resolver.incompatible-rust-versions Type: string Default: See resolver docs Environment: CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS When resolving which version of a dependency to use, select how versions with incompatible package.rust-version s are treated. 
Values include: allow : treat rust-version -incompatible versions like any other version fallback : only consider rust-version -incompatible versions if no other version matched Can be overridden with --ignore-rust-version CLI option Setting the dependency’s version requirement higher than any version with a compatible rust-version Specifying the version to cargo update with --precise See the resolver chapter for more details. MSRV: allow is supported on any version fallback is respected as of 1.84 [registries] The [registries] table is used for specifying additional registries . It consists of a sub-table for each named registry. registries.<name>.index Type: string (url) Default: none Environment: CARGO_REGISTRIES_<name>_INDEX Specifies the URL of the index for the registry. registries.<name>.token Type: string Default: none Environment: CARGO_REGISTRIES_<name>_TOKEN Specifies the authentication token for the given registry. This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registries.<name>.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRIES_<name>_CREDENTIAL_PROVIDER Specifies the credential provider for the given registry. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registries.crates-io.protocol Type: string Default: "sparse" Environment: CARGO_REGISTRIES_CRATES_IO_PROTOCOL Specifies the protocol used to access crates.io. Allowed values are git or sparse . git causes Cargo to clone the entire index of all packages ever published to crates.io from https://github.com/rust-lang/crates.io-index/ . This can have performance implications due to the size of the index. sparse is a newer protocol which uses HTTPS to download only what is necessary from https://index.crates.io/ . This can result in a significant performance improvement for resolving new dependencies in most situations. More information about registry protocols may be found in the Registries chapter . [registry] The [registry] table controls the default registry used when one is not specified. registry.index This value is no longer accepted and should not be used. registry.default Type: string Default: "crates-io" Environment: CARGO_REGISTRY_DEFAULT The name of the registry (from the registries table ) to use by default for registry commands like cargo publish . Can be overridden with the --registry command-line option. registry.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRY_CREDENTIAL_PROVIDER Specifies the credential provider for crates.io . If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registry.token Type: string Default: none Environment: CARGO_REGISTRY_TOKEN Specifies the authentication token for crates.io . This value should only appear in the credentials file. 
This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registry.global-credential-providers Type: array Default: ["cargo:token"] Environment: CARGO_REGISTRY_GLOBAL_CREDENTIAL_PROVIDERS Specifies the list of global credential providers. If credential provider is not set for a specific registry using registries.<name>.credential-provider , Cargo will use the credential providers in this list. Providers toward the end of the list have precedence. Path and arguments are split on spaces. If the path or arguments contains spaces, the credential provider should be defined in the [credential-alias] table and referenced here by its alias. See Registry Authentication for more information. [source] The [source] table defines the registry sources available. See Source Replacement for more information. It consists of a sub-table for each named source. A source should only define one kind (directory, registry, local-registry, or git). source.<name>.replace-with Type: string Default: none Environment: not supported If set, replace this source with the given named source or named registry. source.<name>.directory Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a directory source. source.<name>.registry Type: string (url) Default: none Environment: not supported Sets the URL to use for a registry source. source.<name>.local-registry Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a local registry source. source.<name>.git Type: string (url) Default: none Environment: not supported Sets the URL to use for a git repository source. source.<name>.branch Type: string Default: none Environment: not supported Sets the branch name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.tag Type: string Default: none Environment: not supported Sets the tag name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.rev Type: string Default: none Environment: not supported Sets the revision to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. [target] The [target] table is used for specifying settings for specific platform targets. It consists of a sub-table which is either a platform triple or a cfg() expression . The given values will be used if the target platform matches either the <triple> value or the <cfg> expression. [target.thumbv7m-none-eabi] linker = "arm-none-eabi-gcc" runner = "my-emulator" rustflags = ["…", "…"] [target.'cfg(all(target_arch = "arm", target_os = "none"))'] runner = "my-arm-wrapper" rustflags = ["…", "…"] cfg values come from those built-in to the compiler (run rustc --print=cfg to view) and extra --cfg flags passed to rustc (such as those defined in RUSTFLAGS ). Do not try to match on debug_assertions , test , Cargo features like feature="foo" , or values set by build scripts . If using a target spec JSON file, the <triple> value is the filename stem. For example --target foo/bar.json would match [target.bar] . target.<triple>.ar This option is deprecated and unused. target.<triple>.linker Type: string (program path) Default: none Environment: CARGO_TARGET_<triple>_LINKER Specifies the linker which is passed to rustc (via -C linker ) when the <triple> is being compiled for. By default, the linker is not overridden. 
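Returning to the [source] table described a few entries above, which has no inline example in this text: the sketch below shows the most common use, source replacement for vendored dependencies. This is a hedged illustration; the vendor directory name is an assumption, and cargo vendor prints the exact snippet to use for a given project.

    # .cargo/config.toml -- minimal sketch of source replacement for vendoring.
    # Assumes the vendored crates live in ./vendor.
    [source.crates-io]
    replace-with = "vendored-sources"   # redirect the default registry source

    [source.vendored-sources]
    directory = "vendor"                # a directory source, as described above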
target.<cfg>.linker This is similar to the target linker , but using a cfg() expression . If both a <triple> and <cfg> linker match, the <triple> will take precedence. It is an error if more than one <cfg> linker matches the current target. target.<triple>.runner Type: string or array of strings ( program path with args ) Default: none Environment: CARGO_TARGET_<triple>_RUNNER If a runner is provided, executables for the target <triple> will be executed by invoking the specified runner with the actual executable passed as an argument. This applies to cargo run , cargo test and cargo bench commands. By default, compiled executables are executed directly. target.<cfg>.runner This is similar to the target runner , but using a cfg() expression . If both a <triple> and <cfg> runner match, the <triple> will take precedence. It is an error if more than one <cfg> runner matches the current target. target.<triple>.rustflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTFLAGS Passes a set of custom flags to the compiler for this <triple> . The value may be an array of strings or a space-separated string. See build.rustflags for more details on the different ways to specify extra flags. target.<cfg>.rustflags This is similar to the target rustflags , but using a cfg() expression . If several <cfg> and <triple> entries match the current target, the flags are joined together. target.<triple>.rustdocflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTDOCFLAGS Passes a set of custom flags to the compiler for this <triple> . The value may be an array of strings or a space-separated string. See build.rustdocflags for more details on the different ways to specify extra flags. target.<triple>.<links> The links sub-table provides a way to override a build script . When specified, the build script for the given links library will not be run, and the given values will be used instead. [target.x86_64-unknown-linux-gnu.foo] rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] The [term] table controls terminal output and interaction. term.quiet Type: boolean Default: false Environment: CARGO_TERM_QUIET Controls whether or not log messages are displayed by Cargo. Specifying the --quiet flag will override and force quiet output. Specifying the --verbose flag will override and disable quiet output. term.verbose Type: boolean Default: false Environment: CARGO_TERM_VERBOSE Controls whether or not extra detailed messages are displayed by Cargo. Specifying the --quiet flag will override and disable verbose output. Specifying the --verbose flag will override and force verbose output. term.color Type: string Default: "auto" Environment: CARGO_TERM_COLOR Controls whether or not colored output is used in the terminal. Possible values: auto (default): Automatically detect if color support is available on the terminal. always : Always display colors. never : Never display colors. Can be overridden with the --color command-line option. term.hyperlinks Type: bool Default: auto-detect Environment: CARGO_TERM_HYPERLINKS Controls whether or not hyperlinks are used in the terminal. term.unicode Type: bool Default: auto-detect Environment: CARGO_TERM_UNICODE Controls whether output can be rendered using non-ASCII unicode characters.
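As a hedged illustration of the [term] keys just listed, a config.toml might contain the following (the progress-bar keys described next nest under the same table as term.progress.*):

    [term]
    quiet = false          # CARGO_TERM_QUIET
    verbose = false        # CARGO_TERM_VERBOSE
    color = "auto"         # "always" and "never" are also accepted; --color overrides
    hyperlinks = true      # CARGO_TERM_HYPERLINKS

Each key can equally be set through its environment variable, for example CARGO_TERM_COLOR=always.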
term.progress.when Type: string Default: "auto" Environment: CARGO_TERM_PROGRESS_WHEN Controls whether or not the progress bar is shown in the terminal. Possible values: auto (default): Intelligently guess whether to show the progress bar. always : Always show the progress bar. never : Never show the progress bar. term.progress.width Type: integer Default: none Environment: CARGO_TERM_PROGRESS_WIDTH Sets the width of the progress bar. term.progress.term-integration Type: bool Default: auto-detect Environment: CARGO_TERM_PROGRESS_TERM_INTEGRATION Reports progress to the terminal emulator for display in places like the task bar. | 2026-01-13T09:29:13
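To round off the tables above that have no inline example ([profile], [resolver], and [registries]), here is a minimal, hedged sketch of a .cargo/config.toml; the registry name my-registry and its index URL are placeholders, not real endpoints:

    [profile.release]
    lto = "thin"              # CARGO_PROFILE_RELEASE_LTO
    codegen-units = 1         # CARGO_PROFILE_RELEASE_CODEGEN_UNITS

    [resolver]
    # only fall back to rust-version-incompatible versions when nothing else matches
    incompatible-rust-versions = "fallback"

    [registries.my-registry]
    index = "sparse+https://registry.example.com/index/"   # placeholder URL

As noted above, the token for such a registry belongs in the credentials file, not in config.toml.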
https://man.freebsd.org/cgi/man.cgi?query=fish-doc&sektion=1&manpath=freebsd-ports#end | fish-doc FISH-DOC (1) fish-shell FISH-DOC (1) This is the documentation for fish , the f riendly i nteractive sh ell. A shell is a program that helps you operate your computer by starting other programs. fish offers a command-line interface focused on usability and inter- active use. Some of the special features of fish are: • Extensive UI : Syntax highlighting , Autosuggestions , tab completion and selection lists that can be navigated and filtered. • No configuration needed : fish is designed to be ready to use immedi- ately, without requiring extensive configuration.
• Easy scripting : New functions can be added on the fly. The syntax is easy to learn and use. This page explains how to install and set up fish and where to get more information. WHERE TO GO? If this is your first time using fish, see the tutorial . If you are already familiar with other shells like bash and want to see the scripting differences, see Fish For Bash Users . For an overview of fish's scripting language, see The Fish Language . If it would be useful in a script file, it's here. For information on using fish interactively, see Interactive use . If it's about key presses, syntax highlighting or anything else that needs an interactive terminal session, look here. If you need to install fish first, read on, the rest of this document will tell you how to get, install and configure fish. INSTALLATION This section describes how to install, uninstall, start, and exit fish . It also explains how to make fish the default shell. Installation Up-to-date instructions for installing the latest version of fish are on the fish homepage < https://fishshell.com/ >. To install the development version of fish, see the instructions on the project's GitHub page < https://github.com/fish-shell/fish-shell >. Starting and Exiting Once fish has been installed, open a terminal. If fish is not the de- fault shell: • Type fish to start a shell: > fish • Type exit to end the session: > exit Default Shell There are multiple ways to switch to fish (or any other shell) as your default. The simplest method is to set your terminal emulator (eg GNOME Termi- nal, Apple's Terminal.app, or Konsole) to start fish directly. See its configuration and set the program to start to /usr/local/bin/fish (if that's where fish is installed - substitute another location as appro- priate). Alternatively, you can set fish as your login shell so that it will be started by all terminal logins, including SSH. WARNING: Setting fish as your login shell may cause issues, such as an incor- rect PATH . Some operating systems, including a number of Linux dis- tributions, require the login shell to be Bourne-compatible and to read configuration from /etc/profile . fish may not be suitable as a login shell on these systems. To change your login shell to fish: 1. Add the shell to /etc/shells with: > echo /usr/local/bin/fish | sudo tee -a /etc/shells 2. Change your default shell with: > chsh -s /usr/local/bin/fish Again, substitute the path to fish for /usr/local/bin/fish - see com- mand -s fish inside fish. To change it back to another shell, just sub- stitute /usr/local/bin/fish with /bin/bash , /bin/tcsh or /bin/zsh as appropriate in the steps above. Uninstalling For uninstalling fish: see FAQ: Uninstalling fish . Shebang Line Because shell scripts are written in many different languages, they need to carry information about which interpreter should be used to ex- ecute them. For this, they are expected to have a first line, the she- bang line, which names the interpreter executable. A script written in bash would need a first line like this: #!/bin/bash When the shell tells the kernel to execute the file, it will use the interpreter /bin/bash . For a script written in another language, just replace /bin/bash with the interpreter for that language. For example: /usr/bin/python for a python script, or /usr/local/bin/fish for a fish script, if that is where you have them installed. If you want to share your script with others, you might want to use env to allow for the interpreter to be installed in other locations. 
For example: #!/usr/bin/env fish echo Hello from fish $version This will call env , which then goes through PATH to find a program called "fish". This makes it work, whether fish is installed in (for example) /usr/local/bin/fish , /usr/bin/fish , or ~/.local/bin/fish , as long as that directory is in PATH . The shebang line is only used when scripts are executed without speci- fying the interpreter. For functions inside fish or when executing a script with fish /path/to/script , a shebang is not required (but it doesn't hurt!). When executing files without an interpreter, fish, like other shells, tries your system shell, typically /bin/sh . This is needed because some scripts are shipped without a shebang line. CONFIGURATION To store configuration write it to a file called ~/.config/fish/con- fig.fish . .fish scripts in ~/.config/fish/conf.d/ are also automatically executed before config.fish . These files are read on the startup of every shell, whether interactive and/or if they're login shells. Use status --is-interactive and status --is-login to do things only in interactive/login shells, respectively. This is the short version; for a full explanation, like for sysadmins or integration for developers of other software, see Configuration files . If you want to see what you changed over fish's defaults, see fish_delta . Examples: To add ~/linux/bin to PATH variable when using a login shell, add this to ~/.config/fish/config.fish file: if status --is-login set -gx PATH $PATH ~/linux/bin end This is just an example; using fish_add_path e.g. fish_add_path ~/linux/bin which only adds the path if it isn't included yet is eas- ier. To run commands on exit, use an event handler that is triggered by the exit of the shell: function on_exit --on-event fish_exit echo fish is now exiting end RESOURCES • The GitHub page < https://github.com/fish-shell/fish-shell/ > • The official Gitter channel < https://gitter.im/fish-shell/fish-shell > • The official mailing list at fish-users@lists.sourceforge.net < https://lists.sourceforge.net/lists/listinfo/fish-users > If you have an improvement for fish, you can submit it via the GitHub page. OTHER HELP PAGES Frequently asked questions What is the equivalent to this thing from bash (or other shells)? See Fish for bash users How do I set or clear an environment variable? Use the set command: set -x key value # typically set -gx key value set -e key Since fish 3.1 you can set an environment variable for just one command using the key=value some command syntax, like in other shells. The two lines below behave identically - unlike other shells, fish will output value both times: key=value echo $key begin; set -lx key value; echo $key; end Note that "exported" is not a scope , but an additional bit of state. A variable can be global and exported or local and exported or even uni- versal and exported. Typically it makes sense to make an exported vari- able global. How do I check whether a variable is defined? Use set -q var . For example, if set -q var; echo variable defined; end . To check multiple variables you can combine with and and or like so: if set -q var1; or set -q var2 echo either variable defined end Keep in mind that a defined variable could also be empty, either by having no elements (if set like set var ) or only empty elements (if set like set var "" ). Read on for how to deal with those. How do I check whether a variable is not empty? Use string length -q -- $var . For example, if string length -q -- $var; echo not empty; end . 
Note that string length will interpret a list of multiple variables as a disjunction (meaning any/or): if string length -q -- $var1 $var2 $var3 echo at least one of these variables is not empty end Alternatively, use test -n "$var" , but remember that the variable must be double-quoted . For example, if test -n "$var"; echo not empty; end . The test command provides its own and (-a) and or (-o): if test -n "$var1" -o -n "$var2" -o -n "$var3" echo at least one of these variables is not empty end If you want to know if a variable has no elements , use set -q var[1] . Why doesn't set -Ux (exported universal variables) seem to work? A global variable of the same name already exists. Environment variables such as EDITOR or TZ can be set universally using set -Ux . However, if there is an environment variable already set be- fore fish starts (such as by login scripts or system administrators), it is imported into fish as a global variable. The variable scopes are searched from the "inside out", which means that local variables are checked first, followed by global variables, and finally universal variables. This means that the global value takes precedence over the universal value. To avoid this problem, consider changing the setting which fish inher- its. If this is not possible, add a statement to your configuration file (usually ~/.config/fish/config.fish ): set -gx EDITOR vim How do I run a command every login? What's fish's equivalent to .bashrc or .profile? Edit the file ~/.config/fish/config.fish [1], creating it if it does not exist (Note the leading period). Unlike .bashrc and .profile, this file is always read, even in non-in- teractive or login shells. To do something only in interactive shells, check status is-interactive like: if status is-interactive # use the coolbeans theme fish_config theme choose coolbeans end [1] The "~/.config" part of this can be set via $XDG_CONFIG_HOME, that's just the default. How do I set my prompt? The prompt is the output of the fish_prompt function. Put it in ~/.con- fig/fish/functions/fish_prompt.fish . For example, a simple prompt is: function fish_prompt set_color $fish_color_cwd echo -n (prompt_pwd) set_color normal echo -n ' > ' end You can also use the Web configuration tool, fish_config , to preview and choose from a gallery of sample prompts. Or you can use fish_config from the commandline: > fish_config prompt show # displays all the prompts fish ships with > fish_config prompt choose disco # loads the disco prompt in the current shell > fish_config prompt save # makes the change permanent If you want to modify your existing prompt, you can use funced and funcsave like: >_ funced fish_prompt # This opens up your editor (set in $EDITOR). # Modify the function, # save the file and repeat to your liking. # Once you are happy with it: >_ funcsave fish_prompt This also applies to fish_right_prompt and fish_mode_prompt . Why does my prompt show a [I]? That's the fish_mode_prompt . It is displayed by default when you've ac- tivated vi mode using fish_vi_key_bindings . If you haven't activated vi mode on purpose, you might have installed a third-party theme or plugin that does it. If you want to change or disable this display, modify the fish_mode_prompt function, for instance via funced . How do I customize my syntax highlighting colors? Use the web configuration tool, fish_config , or alter the fish_color family of environment variables . 
You can also use fish_config on the commandline, like: > fish_config theme show # to demonstrate all the colorschemes > fish_config theme choose coolbeans # to load the "coolbeans" theme > fish_config theme save # to make the change permanent How do I change the greeting message? Change the value of the variable fish_greeting or create a fish_greeting function. For example, to remove the greeting use: set -U fish_greeting Or if you prefer not to use a universal variable, use: set -g fish_greeting in config.fish . How do I run a command from history? Type some part of the command, and then hit the up () or down () arrow keys to navigate through history matches, or press ctrl-r to open the history in a searchable pager. In this pager you can press ctrl-r or ctrl-s to move to older or younger history respectively. Additional default key bindings include ctrl-p (up) and ctrl-n (down). See Searchable command history for more information. Why doesn't history substitution ("!$" etc.) work? Because history substitution is an awkward interface that was invented before interactive line editing was even possible. Instead of adding this pseudo-syntax, fish opts for nice history searching and recall features. Switching requires a small change of habits: if you want to modify an old line/word, first recall it, then edit. As a special case, most of the time history substitution is used as sudo !! . In that case just press alt-s , and it will recall your last commandline with sudo prefixed (or toggle a sudo prefix on the current commandline if there is anything). In general, fish's history recall works like this: • Like other shells, the Up arrow, up recalls whole lines, starting from the last executed line. So instead of typing !! , you would just hit the up-arrow. • If the line you want is far back in the history, type any part of the line and then press Up one or more times. This will filter the re- called lines to ones that include this text, and you will get to the line you want much faster. This replaces "!vi", "!?bar.c" and the like. If you want to see more context, you can press ctrl-r to open the history in the pager. • alt-up recalls individual arguments, starting from the last argument in the last executed line. This can be used instead of "!$". See documentation for more details about line editing in fish. That being said, you can use Abbreviations to implement history substi- tution. Here's just !! : function last_history_item; echo $history[1]; end abbr -a !! --position anywhere --function last_history_item Run this and !! will be replaced with the last history entry, anywhere on the commandline. Put it into config.fish to keep it. How do I run a subcommand? The backtick doesn't work! fish uses parentheses for subcommands. For example: for i in (ls) echo $i end It also supports the familiar $() syntax, even in quotes. Backticks are not supported because they are discouraged even in POSIX shells. They nest poorly and are hard to tell from single quotes ( '' ). My command (pkg-config) gives its output as a single long string? Unlike other shells, fish splits command substitutions only on new- lines, not spaces or tabs or the characters in $IFS. That means if you run count (printf '%s ' a b c) It will print 1 , because the "a b c " is used in one piece. But if you do count (printf '%s\n' a b c) it will print 3 , because it gave count the arguments "a", "b" and "c" separately. In the overwhelming majority of cases, splitting on spaces is unwanted, so this is an improvement. 
This is why you hear about problems with filenames with spaces, after all. However sometimes, especially with pkg-config and related tools, split- ting on spaces is needed. In these cases use string split -n " " like: g++ example_01.cpp (pkg-config --cflags --libs gtk+-2.0 | string split -n " ") The -n is so empty elements are removed like POSIX shells would do. How do I get the exit status of a command? Use the $status variable. This replaces the $? variable used in other shells. somecommand if test $status -eq 7 echo "That's my lucky number!" end If you are just interested in success or failure, you can run the com- mand directly as the if-condition: if somecommand echo "Command succeeded" else echo "Command failed" end Or if you just want to do one command in case the first succeeded or failed, use and or or : somecommand or someothercommand See the Conditions and the documentation for test and if for more in- formation. My command prints "No matches for wildcard" but works in bash In short: quote or escape the wildcard: scp user@ip:/dir/"string-*" When fish sees an unquoted * , it performs wildcard expansion . That means it tries to match filenames to the given string. If the wildcard doesn't match any files, fish prints an error instead of running the command: > echo *this*does*not*exist fish: No matches for wildcard '*this*does*not*exist'. See `help expand`. echo *this*does*not*exist ^ Now, bash also tries to match files in this case, but when it doesn't find a match, it passes along the literal wildcard string instead. That means that commands like the above scp user@ip:/dir/string-* or apt install postgres-* appear to work, because most of the time the string doesn't match and so it passes along the string-* , which is then interpreted by the re- ceiving program. But it also means that these commands can stop working at any moment once a matching file is encountered (because it has been created or the command is executed in a different working directory), and to deal with that bash needs workarounds like for f in ./*.mpg; do # We need to test if the file really exists because # the wildcard might have failed to match. test -f "$f" || continue mympgviewer "$f" done (from http://mywiki.wooledge.org/BashFAQ/004 ) For these reasons, fish does not do this, and instead expects asterisks to be quoted or escaped if they aren't supposed to be expanded. This is similar to bash's "failglob" option. Why won't SSH/SCP/rsync connect properly when fish is my login shell? This problem may show up as messages like " Received message too long ", " open terminal failed: not a terminal ", " Bad packet length ", or " Con- nection refused " with strange output in ssh_exchange_identification messages in the debug log. This usually happens because fish reads the user configuration file ( ~/.config/fish/config.fish ) always , whether it's in an interactive or login or non-interactive or non-login shell. This simplifies matters, but it also means when config.fish generates output, it will do that even in non-interactive shells like the one ssh/scp/rsync start when they connect. Anything in config.fish that produces output should be guarded with status is-interactive (or status is-login if you prefer): if status is-interactive ... end The same applies for example when you start tmux in config.fish without guards, which will cause a message like sessions should be nested with care, unset $TMUX to force . 
I'm getting weird graphical glitches (a staircase effect, ghost characters, cursor in the wrong position,...)? In a terminal, the application running inside it and the terminal it- self need to agree on the width of characters in order to handle cursor movement. This is more important to fish than other shells because features like syntax highlighting and autosuggestions are implemented by moving the cursor. Sometimes, there is disagreement on the width. There are numerous causes and fixes for this: • It is possible the character is simply too new for your system to know - in this case you need to refrain from using it. • Fish or your terminal might not know about the character or handle it wrong - in this case fish or your terminal needs to be fixed, or you need to update to a fixed version. • The character has an "ambiguous" width and fish thinks that means a width of X while your terminal thinks it's Y. In this case you either need to change your terminal's configuration or set $fish_ambigu- ous_width to the correct value. • The character is an emoji and the host system only supports Unicode 8, while you are running the terminal on a system that uses Unicode >= 9. In this case set $fish_emoji_width to 2. This also means that a few things are unsupportable: • Non-monospace fonts - there is no way for fish to figure out what width a specific character has as it has no influence on the termi- nal's font rendering. • Different widths for multiple ambiguous width characters - there is no way for fish to know which width you assign to each character. Uninstalling fish If you want to uninstall fish, first make sure fish is not set as your shell. Run chsh -s /bin/bash if you are not sure. If you installed it with a package manager, just use that package man- ager's uninstall function. If you built fish yourself, assuming you in- stalled it to /usr/local, do this: rm -Rf /usr/local/etc/fish /usr/local/share/fish ~/.config/fish rm /usr/local/share/man/man1/fish*.1 cd /usr/local/bin rm -f fish fish_indent Interactive use Fish prides itself on being really nice to use interactively. That's down to a few features we'll explain in the next few sections. Fish is used by giving commands in the fish language, see The Fish Lan- guage for information on that. Help Fish has an extensive help system. Use the help command to obtain help on a specific subject or command. For instance, writing help syntax displays the syntax section of this documentation. Fish also has man pages for its commands, and translates the help pages to man pages. For example, man set will show the documentation for set as a man page. Help on a specific builtin can also be obtained with the -h parameter. For instance, to obtain help on the fg builtin, either type fg -h or help fg . The main page can be viewed via help index (or just help ) or man fish-doc . The tutorial can be viewed with help tutorial or man fish-tu- torial . Autosuggestions fish suggests commands as you type, based on command history , comple- tions, and valid file paths. As you type commands, you will see a sug- gestion offered after the cursor, in a muted gray color (which can be changed with the fish_color_autosuggestion variable). To accept the autosuggestion (replacing the command line contents), press right () or ctrl-f . To accept the first suggested word, press alt-right () or alt-f . If the autosuggestion is not what you want, just ignore it: it won't execute unless you accept it. 
Autosuggestions are a powerful way to quickly summon frequently entered commands, by typing the first few characters. They are also an effi- cient technique for navigating through directory hierarchies. If you don't like autosuggestions, you can disable them by setting $fish_autosuggestion_enabled to 0: set -g fish_autosuggestion_enabled 0 Tab Completion Tab completion is a time saving feature of any modern shell. When you type tab , fish tries to guess the rest of the word under the cursor. If it finds just one possibility, it inserts it. If it finds more, it in- serts the longest unambiguous part and then opens a menu (the "pager") that you can navigate to find what you're looking for. The pager can be navigated with the arrow keys, pageup / pagedown , tab or shift-tab . Pressing ctrl-s (the pager-toggle-search binding - / in vi mode) opens up a search menu that you can use to filter the list. Fish provides some general purpose completions, like for commands, variable names, usernames or files. It also provides a large number of program specific scripted comple- tions. Most of these completions are simple options like the -l option for ls , but a lot are more advanced. For example: • man and whatis show the installed manual pages as completions. • make uses targets in the Makefile in the current directory as comple- tions. • mount uses mount points specified in fstab as completions. • apt , rpm and yum show installed or installable packages You can also write your own completions or install some you got from someone else. For that, see Writing your own completions . Completion scripts are loaded on demand, just like functions are . The difference is the $fish_complete_path list is used instead of $fish_function_path . Typically you can drop new completions in ~/.con- fig/fish/completions/name-of-command.fish and fish will find them auto- matically. Syntax highlighting Fish interprets the command line as it is typed and uses syntax high- lighting to provide feedback. The most important feedback is the detec- tion of potential errors. By default, errors are marked red. Detected errors include: • Non-existing commands. • Reading from or appending to a non-existing file. • Incorrect use of output redirects • Mismatched parenthesis To customize the syntax highlighting, you can set the environment vari- ables listed in the Variables for changing highlighting colors section. Fish also provides pre-made color themes you can pick with fish_config . Running just fish_config opens a browser interface, or you can use fish_config theme in the terminal. For example, to disable nearly all coloring: fish_config theme choose None Or, to see all themes, right in your terminal: fish_config theme show Syntax highlighting variables The colors used by fish for syntax highlighting can be configured by changing the values of various variables. The value of these variables can be one of the colors accepted by the set_color command. The modi- fier switches accepted by set_color like --bold , --dim , --italics , --reverse and --underline are also accepted. 
Example: to make errors highlighted and red, use: set fish_color_error red --bold The following variables are available to change the highlighting colors in fish:

    fish_color_normal          default color
    fish_color_command         commands like echo
    fish_color_keyword         keywords like if - this falls back on the command color if unset
    fish_color_quote           quoted text like "abc"
    fish_color_redirection     IO redirections like >/dev/null
    fish_color_end             process separators like ; and &
    fish_color_error           syntax errors
    fish_color_param           ordinary command parameters
    fish_color_valid_path      parameters that are filenames (if the file exists)
    fish_color_option          options starting with "-", up to the first "--" parameter
    fish_color_comment         comments like '# important'
    fish_color_selection       selected text in vi visual mode
    fish_color_operator        parameter expansion operators like * and ~
    fish_color_escape          character escapes like \n and \x70
    fish_color_autosuggestion  autosuggestions (the proposed rest of a command)
    fish_color_cwd             the current working directory in the default prompt
    fish_color_cwd_root        the current working directory in the default prompt for the root user
    fish_color_user            the username in the default prompt
    fish_color_host            the hostname in the default prompt
    fish_color_host_remote     the hostname in the default prompt for remote sessions (like ssh)
    fish_color_status          the last command's nonzero exit code in the default prompt
    fish_color_cancel          the '^C' indicator on a canceled command
    fish_color_search_match    history search matches and selected pager items (background only)
    fish_color_history_current the current position in the history for commands like dirh and cdh

If a variable isn't set or is empty, fish usually tries $fish_color_normal , except for: • $fish_color_keyword , where it tries $fish_color_command first. • $fish_color_option , where it tries $fish_color_param first. • For $fish_color_valid_path , if that doesn't have a color, but only modifiers, it adds those to the color that would otherwise be used, like $fish_color_param . But if valid paths have a color, it uses that and adds in modifiers from the other color. Pager color variables fish will sometimes present a list of choices in a table, called the pager. Example: to set the background of each pager row, use: set fish_pager_color_background --background=white To have black text on alternating white and gray backgrounds: set fish_pager_color_prefix black set fish_pager_color_completion black set fish_pager_color_description black set fish_pager_color_background --background=white set fish_pager_color_secondary_background --background=brwhite Variables affecting the pager colors:

    fish_pager_color_progress                  the progress bar at the bottom left corner
    fish_pager_color_background                the background color of a line
    fish_pager_color_prefix                    the prefix string, i.e. the string that is to be completed
    fish_pager_color_completion                the completion itself, i.e. the proposed rest of the string
    fish_pager_color_description               the completion description
    fish_pager_color_selected_background       background of the selected completion
    fish_pager_color_selected_prefix           prefix of the selected completion
    fish_pager_color_selected_completion       suffix of the selected completion
    fish_pager_color_selected_description      description of the selected completion
    fish_pager_color_secondary_background      background of every second unselected completion
    fish_pager_color_secondary_prefix          prefix of every second unselected completion
    fish_pager_color_secondary_completion      suffix of every second unselected completion
    fish_pager_color_secondary_description     description of every second unselected completion

When the secondary or selected variables aren't set or are empty, the normal variables are used, except for $fish_pager_color_selected_background , where the background of $fish_color_search_match is tried first. Abbreviations To avoid needless typing, a frequently-run command like git checkout can be abbreviated to gco using the abbr command. abbr -a gco git checkout After entering gco and pressing space or enter , a gco in command position will turn into git checkout in the command line. If you want to use a literal gco sometimes, use ctrl-space [1]. Abbreviations are a lot more powerful than just replacing literal strings. For example you can make going up a number of directories easier with this: function multicd echo cd (string repeat -n (math (string length -- $argv[1]) - 1) ../) end abbr --add dotdot --regex '^\.\.+$' --function multicd Now, .. transforms to cd ../ , while ... turns into cd ../../ and .... expands to cd ../../../ . The advantage over aliases is that you can see the actual command before using it, add to it or change it, and the actual command will be stored in history. [1] Any binding that executes the expand-abbr or execute bind function will expand abbreviations. By default ctrl-space is bound to just inserting a space. Programmable prompt When it is fish's turn to ask for input (like after it started or the command ended), it will show a prompt. Often this looks something like: you@hostname ~> This prompt is determined by running the fish_prompt and fish_right_prompt functions. The output of the former is displayed on the left and the latter's output on the right side of the terminal. For vi mode , the output of fish_mode_prompt will be prepended on the left. Fish ships with a few prompts which you can see with fish_config . If you run just fish_config it will open a web interface [2] where you'll be shown the prompts and can pick which one you want.
fish_config prompt show will show you the prompts right in your terminal. For example fish_config prompt choose disco will temporarily select the "disco" prompt. If you like it and decide to keep it, run fish_config prompt save . You can also change these functions yourself by running funced fish_prompt and funcsave fish_prompt once you are happy with the result (or fish_right_prompt if you want to change that). [2] The web interface runs purely locally on your computer and re- quires python to be installed. Configurable greeting When it is started interactively, fish tries to run the fish_greeting function. The default fish_greeting prints a simple message. You can change its text by changing the $fish_greeting variable, for instance using a universal variable : set -U fish_greeting or you can set it globally in config.fish : set -g fish_greeting 'Hey, stranger!' or you can script it by changing the function: function fish_greeting random choice "Hello!" "Hi" "G'day" "Howdy" end save this in config.fish or a function file . You can also use funced and funcsave to edit it easily. Programmable title When using most terminals, it is possible to set the text displayed in the titlebar of the terminal window. Fish does this by running the fish_title function. It is executed before and after a command and the output is used as a titlebar message. The status current-command builtin will always return the name of the job to be put into the foreground (or fish if control is returning to the shell) when the fish_title function is called. The first argument will contain the most recently executed foreground command as a string. The default title shows the hostname if connected via ssh, the cur- rently running command (unless it is fish) and the current working di- rectory. All of this is shortened to not make the tab too wide. Examples: To show the last command and working directory in the title: function fish_title # `prompt_pwd` shortens the title. This helps prevent tabs from becoming very wide. echo $argv[1] (prompt_pwd) pwd end Command line editor The fish editor features copy and paste, a searchable history and many editor functions that can be bound to special keyboard shortcuts. Like bash and other shells, fish includes two sets of keyboard short- cuts (or key bindings): one inspired by the Emacs text editor, and one by the vi text editor. The default editing mode is Emacs. You can switch to vi mode by running fish_vi_key_bindings and switch back with fish_default_key_bindings . You can also make your own key bindings by creating a function and setting the fish_key_bindings variable to its name. For example: function fish_hybrid_key_bindings --description \ "Vi-style bindings that inherit emacs-style bindings in all modes" for mode in default insert visual fish_default_key_bindings -M $mode end fish_vi_key_bindings --no-erase end set -g fish_key_bindings fish_hybrid_key_bindings While the key bindings included with fish include many of the shortcuts popular from the respective text editors, they are not a complete im- plementation. They include a shortcut to open the current command line in your preferred editor ( alt-e by default) if you need the full power of your editor. Shared bindings Some bindings are common across Emacs and vi mode, because they aren't text editing bindings, or because what vi/Vim does for a particular key doesn't make sense for a shell. • tab completes the current token. shift-tab completes the current to- ken and starts the pager's search mode. 
tab is the same as ctrl-i . • left () and right () move the cursor left or right by one character. If the cursor is already at the end of the line, and an autosugges- tion is available, right () accepts the autosuggestion. • enter executes the current commandline or inserts a newline if it's not complete yet (e.g. a ) or end is missing). • alt-enter inserts a newline at the cursor position. This is useful to add a line to a commandline that's already complete. • alt-left () and alt-right () move the cursor one word left or right (to the next space or punctuation mark), or moves forward/backward in the directory history if the command line is empty. If the cursor is already at the end of the line, and an autosuggestion is available, alt-right () (or alt-f ) accepts the first word in the suggestion. • ctrl-left () and ctrl-right () move the cursor one word left or right. These accept one word of the autosuggestion - the part they'd move over. • shift-left () and shift-right () move the cursor one word left or right, without stopping on punctuation. These accept one big word of the autosuggestion. • up () and down () (or ctrl-p and ctrl-n for emacs aficionados) search the command history for the previous/next command containing the string that was specified on the commandline before the search was started. If the commandline was empty when the search started, all commands match. See the history section for more information on his- tory searching. • alt-up () and alt-down () search the command history for the previ- ous/next token containing the token under the cursor before the search was started. If the commandline was not on a token when the search started, all tokens match. See the history section for more information on history searching. • ctrl-c interrupts/kills whatever is running (SIGINT). • ctrl-d deletes one character to the right of the cursor. If the com- mand line is empty, ctrl-d will exit fish. • ctrl-u removes contents from the beginning of line to the cursor (moving it to the killring ). • ctrl-l clears and repaints the screen. • ctrl-w removes the previous path component (everything up to the pre- vious "/", ":" or "@") (moving it to the Copy and paste (Kill Ring) ). • ctrl-x copies the current buffer to the system's clipboard, ctrl-v inserts the clipboard contents. (see fish_clipboard_copy and fish_clipboard_paste ) • alt-d or ctrl-delete moves the next word to the Copy and paste (Kill Ring) . • alt-d lists the directory history if the command line is empty. • alt-delete moves the next argument to the Copy and paste (Kill Ring) . • shift-delete removes t | 2026-01-13T09:29:13 |
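The tab-completion section above points to "Writing your own completions" without showing one. The following is a minimal, hedged sketch for a hypothetical command mytool, saved as ~/.config/fish/completions/mytool.fish so fish loads it on demand (the command name and its options are invented for illustration):

    # Completions for a hypothetical `mytool` command (illustration only).
    complete -c mytool -s h -l help -d 'Show help'
    complete -c mytool -l verbose -d 'Print more output'
    # Offer subcommands only while no subcommand has been typed yet.
    complete -c mytool -n '__fish_use_subcommand' -a 'build test' -d 'Subcommand'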
https://doc.rust-lang.org/reference/items/unions.html | Unions - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [items .union] Unions [items .union .syntax] Syntax Union → union IDENTIFIER GenericParams ? WhereClause ? { StructFields ? } Show Railroad Union union IDENTIFIER GenericParams WhereClause { StructFields } [items .union .intro] A union declaration uses the same syntax as a struct declaration, except with union in place of struct . [items .union .namespace] A union declaration defines the given name in the type namespace of the module or block where it is located. #![allow(unused)] fn main() { #[repr(C)] union MyUnion { f1: u32, f2: f32, } } [items .union .common-storage] The key property of unions is that all fields of a union share common storage. As a result, writes to one field of a union can overwrite its other fields, and size of a union is determined by the size of its largest field. [items .union .field-restrictions] Union field types are restricted to the following subset of types: [items .union .field-copy] Copy types [items .union .field-references] References ( &T and &mut T for arbitrary T ) [items .union .field-manually-drop] ManuallyDrop<T> (for arbitrary T ) [items .union .field-tuple] Tuples and arrays containing only allowed union field types [items .union .drop] This restriction ensures, in particular, that union fields never need to be dropped. Like for structs and enums, it is possible to impl Drop for a union to manually define what happens when it gets dropped. [items .union .fieldless] Unions without any fields are not accepted by the compiler, but can be accepted by macros. [items .union .init] Initialization of a union [items .union .init .intro] A value of a union type can be created using the same syntax that is used for struct types, except that it must specify exactly one field: #![allow(unused)] fn main() { union MyUnion { f1: u32, f2: f32 } let u = MyUnion { f1: 1 }; } [items .union .init .result] The expression above creates a value of type MyUnion and initializes the storage using field f1 . The union can be accessed using the same syntax as struct fields: #![allow(unused)] fn main() { union MyUnion { f1: u32, f2: f32 } let u = MyUnion { f1: 1 }; let f = unsafe { u.f1 }; } [items .union .fields] Reading and writing union fields [items .union .fields .intro] Unions have no notion of an “active field”. Instead, every union access just interprets the storage as the type of the field used for the access. [items .union .fields .read] Reading a union field reads the bits of the union at the field’s type. [items .union .fields .offset] Fields might have a non-zero offset (except when the C representation is used); in that case the bits starting at the offset of the fields are read [items .union .fields .validity] It is the programmer’s responsibility to make sure that the data is valid at the field’s type. Failing to do so results in undefined behavior . For example, reading the value 3 from a field of the boolean type is undefined behavior. Effectively, writing to and then reading from a union with the C representation is analogous to a transmute from the type used for writing to the type used for reading. 
[items .union .fields .read-safety] Consequently, all reads of union fields have to be placed in unsafe blocks: #![allow(unused)] fn main() { union MyUnion { f1: u32, f2: f32 } let u = MyUnion { f1: 1 }; unsafe { let f = u.f1; } } Commonly, code using unions will provide safe wrappers around unsafe union field accesses. [items .union .fields .write-safety] In contrast, writes to union fields are safe, since they just overwrite arbitrary data, but cannot cause undefined behavior. (Note that union field types can never have drop glue, so a union field write will never implicitly drop anything.) [items .union .pattern] Pattern matching on unions [items .union .pattern .intro] Another way to access union fields is to use pattern matching. [items .union .pattern .one-field] Pattern matching on union fields uses the same syntax as struct patterns, except that the pattern must specify exactly one field. [items .union .pattern .safety] Since pattern matching is like reading the union with a particular field, it has to be placed in unsafe blocks as well. #![allow(unused)] fn main() { union MyUnion { f1: u32, f2: f32 } fn f(u: MyUnion) { unsafe { match u { MyUnion { f1: 10 } => { println!("ten"); } MyUnion { f2 } => { println!("{}", f2); } } } } } [items .union .pattern .subpattern] Pattern matching may match a union as a field of a larger structure. In particular, when using a Rust union to implement a C tagged union via FFI, this allows matching on the tag and the corresponding field simultaneously: #![allow(unused)] fn main() { #[repr(u32)] enum Tag { I, F } #[repr(C)] union U { i: i32, f: f32, } #[repr(C)] struct Value { tag: Tag, u: U, } fn is_zero(v: Value) -> bool { unsafe { match v { Value { tag: Tag::I, u: U { i: 0 } } => true, Value { tag: Tag::F, u: U { f: num } } if num == 0.0 => true, _ => false, } } } } [items .union .ref] References to union fields [items .union .ref .intro] Since union fields share common storage, gaining write access to one field of a union can give write access to all its remaining fields. [items .union .ref .borrow] Borrow checking rules have to be adjusted to account for this fact. As a result, if one field of a union is borrowed, all its remaining fields are borrowed as well for the same lifetime. #![allow(unused)] fn main() { union MyUnion { f1: u32, f2: f32 } // ERROR: cannot borrow `u` (via `u.f2`) as mutable more than once at a time fn test() { let mut u = MyUnion { f1: 1 }; unsafe { let b1 = &mut u.f1; // ---- first mutable borrow occurs here (via `u.f1`) let b2 = &mut u.f2; // ^^^^ second mutable borrow occurs here (via `u.f2`) *b1 = 5; } // - first borrow ends here assert_eq!(unsafe { u.f1 }, 5); } } [items .union .ref .usage] As you could see, in many aspects (except for layouts, safety, and ownership) unions behave exactly like structs, largely as a consequence of inheriting their syntactic shape from structs. This is also true for many unmentioned aspects of Rust language (such as privacy, name resolution, type inference, generics, trait implementations, inherent implementations, coherence, pattern checking, etc etc etc). | 2026-01-13T09:29:13 |
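The Reference page above notes that code using unions commonly provides safe wrappers around the unsafe field accesses. A minimal sketch of that pattern, assuming a hypothetical FloatOrBits union and FloatBits wrapper that are not taken from the page itself:

// Hypothetical safe wrapper around unsafe union field reads.
// repr(C) places both fields at offset 0, so reads reinterpret the same bytes.
#[repr(C)]
union FloatOrBits {
    value: f32,
    bits: u32,
}

struct FloatBits {
    storage: FloatOrBits,
}

impl FloatBits {
    fn from_value(value: f32) -> Self {
        // Writing a union field (here via initialization) is safe.
        FloatBits { storage: FloatOrBits { value } }
    }

    fn value(&self) -> f32 {
        // Reading a union field is unsafe; this is sound because `storage`
        // is only ever initialized through `from_value`, so the bytes always
        // hold a valid f32.
        unsafe { self.storage.value }
    }

    fn bits(&self) -> u32 {
        // Any bit pattern is a valid u32, so reinterpreting the f32 bytes
        // as u32 is also sound.
        unsafe { self.storage.bits }
    }
}

fn main() {
    let x = FloatBits::from_value(1.0);
    println!("{} has bit pattern {:#010x}", x.value(), x.bits());
}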
https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html#paths-overrides | Overriding Dependencies - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book Overriding Dependencies The desire to override a dependency can arise through a number of scenarios. Most of them, however, boil down to the ability to work with a crate before it’s been published to crates.io . For example: A crate you’re working on is also used in a much larger application you’re working on, and you’d like to test a bug fix to the library inside of the larger application. An upstream crate you don’t work on has a new feature or a bug fix on the master branch of its git repository which you’d like to test out. You’re about to publish a new major version of your crate, but you’d like to do integration testing across an entire package to ensure the new major version works. You’ve submitted a fix to an upstream crate for a bug you found, but you’d like to immediately have your application start depending on the fixed version of the crate to avoid blocking on the bug fix getting merged. These scenarios can be solved with the [patch] manifest section . This chapter walks through a few different use cases, and includes details on the different ways to override a dependency. Example use cases Testing a bugfix Working with an unpublished minor version Overriding repository URL Prepublishing a breaking change Using [patch] with multiple versions Reference The [patch] section The [replace] section paths overrides Note : See also specifying a dependency with multiple locations , which can be used to override the source for a single dependency declaration in a local package. Testing a bugfix Let’s say you’re working with the uuid crate but while you’re working on it you discover a bug. You are, however, quite enterprising so you decide to also try to fix the bug! Originally your manifest will look like: [package] name = "my-library" version = "0.1.0" [dependencies] uuid = "1.0" First thing we’ll do is to clone the uuid repository locally via: $ git clone https://github.com/uuid-rs/uuid.git Next we’ll edit the manifest of my-library to contain: [patch.crates-io] uuid = { path = "../path/to/uuid" } Here we declare that we’re patching the source crates-io with a new dependency. This will effectively add the local checked out version of uuid to the crates.io registry for our local package. Next up we need to ensure that our lock file is updated to use this new version of uuid so our package uses the locally checked out copy instead of one from crates.io. The way [patch] works is that it’ll load the dependency at ../path/to/uuid and then whenever crates.io is queried for versions of uuid it’ll also return the local version. This means that the version number of the local checkout is significant and will affect whether the patch is used. Our manifest declared uuid = "1.0" which means we’ll only resolve to >= 1.0.0, < 2.0.0 , and Cargo’s greedy resolution algorithm also means that we’ll resolve to the maximum version within that range. Typically this doesn’t matter as the version of the git repository will already be greater or match the maximum version published on crates.io, but it’s important to keep this in mind! 
In any case, typically all you need to do now is: $ cargo build Compiling uuid v1.0.0 (.../uuid) Compiling my-library v0.1.0 (.../my-library) Finished dev [unoptimized + debuginfo] target(s) in 0.32 secs And that’s it! You’re now building with the local version of uuid (note the path in parentheses in the build output). If you don’t see the local path version getting built then you may need to run cargo update uuid --precise $version where $version is the version of the locally checked out copy of uuid . Once you’ve fixed the bug you originally found the next thing you’ll want to do is to likely submit that as a pull request to the uuid crate itself. Once you’ve done this then you can also update the [patch] section. The listing inside of [patch] is just like the [dependencies] section, so once your pull request is merged you could change your path dependency to: [patch.crates-io] uuid = { git = 'https://github.com/uuid-rs/uuid.git' } Working with an unpublished minor version Let’s now shift gears a bit from bug fixes to adding features. While working on my-library you discover that a whole new feature is needed in the uuid crate. You’ve implemented this feature, tested it locally above with [patch] , and submitted a pull request. Let’s go over how you continue to use and test it before it’s actually published. Let’s also say that the current version of uuid on crates.io is 1.0.0 , but since then the master branch of the git repository has updated to 1.0.1 . This branch includes your new feature you submitted previously. To use this repository we’ll edit our Cargo.toml to look like [package] name = "my-library" version = "0.1.0" [dependencies] uuid = "1.0.1" [patch.crates-io] uuid = { git = 'https://github.com/uuid-rs/uuid.git' } Note that our local dependency on uuid has been updated to 1.0.1 as it’s what we’ll actually require once the crate is published. This version doesn’t exist on crates.io, though, so we provide it with the [patch] section of the manifest. Now when our library is built it’ll fetch uuid from the git repository and resolve to 1.0.1 inside the repository instead of trying to download a version from crates.io. Once 1.0.1 is published on crates.io the [patch] section can be deleted. It’s also worth noting that [patch] applies transitively . Let’s say you use my-library in a larger package, such as: [package] name = "my-binary" version = "0.1.0" [dependencies] my-library = { git = 'https://example.com/git/my-library' } uuid = "1.0" [patch.crates-io] uuid = { git = 'https://github.com/uuid-rs/uuid.git' } Remember that [patch] is applicable transitively but can only be defined at the top level so we consumers of my-library have to repeat the [patch] section if necessary. Here, though, the new uuid crate applies to both our dependency on uuid and the my-library -> uuid dependency. The uuid crate will be resolved to one version for this entire crate graph, 1.0.1, and it’ll be pulled from the git repository. Overriding repository URL In case the dependency you want to override isn’t loaded from crates.io , you’ll have to change a bit how you use [patch] . For example, if the dependency is a git dependency, you can override it to a local path with: [patch."https://github.com/your/repository"] my-library = { path = "../my-library/path" } And that’s it! Prepublishing a breaking change Let’s take a look at working with a new major version of a crate, typically accompanied with breaking changes. 
Sticking with our previous crates, this means that we’re going to be creating version 2.0.0 of the uuid crate. After we’ve submitted all changes upstream we can update our manifest for my-library to look like: [dependencies] uuid = "2.0" [patch.crates-io] uuid = { git = "https://github.com/uuid-rs/uuid.git", branch = "2.0.0" } And that’s it! Like with the previous example the 2.0.0 version doesn’t actually exist on crates.io but we can still put it in through a git dependency through the usage of the [patch] section. As a thought exercise let’s take another look at the my-binary manifest from above again as well: [package] name = "my-binary" version = "0.1.0" [dependencies] my-library = { git = 'https://example.com/git/my-library' } uuid = "1.0" [patch.crates-io] uuid = { git = 'https://github.com/uuid-rs/uuid.git', branch = '2.0.0' } Note that this will actually resolve to two versions of the uuid crate. The my-binary crate will continue to use the 1.x.y series of the uuid crate but the my-library crate will use the 2.0.0 version of uuid . This will allow you to gradually roll out breaking changes to a crate through a dependency graph without being forced to update everything all at once. Using [patch] with multiple versions You can patch in multiple versions of the same crate with the package key used to rename dependencies. For example let’s say that the serde crate has a bugfix that we’d like to use to its 1.* series but we’d also like to prototype using a 2.0.0 version of serde we have in our git repository. To configure this we’d do: [patch.crates-io] serde = { git = 'https://github.com/serde-rs/serde.git' } serde2 = { git = 'https://github.com/example/serde.git', package = 'serde', branch = 'v2' } The first serde = ... directive indicates that serde 1.* should be used from the git repository (pulling in the bugfix we need) and the second serde2 = ... directive indicates that the serde package should also be pulled from the v2 branch of https://github.com/example/serde . We’re assuming here that Cargo.toml on that branch mentions version 2.0.0 . Note that when using the package key the serde2 identifier here is actually ignored. We simply need a unique name which doesn’t conflict with other patched crates. The [patch] section The [patch] section of Cargo.toml can be used to override dependencies with other copies. The syntax is similar to the [dependencies] section: [patch.crates-io] foo = { git = 'https://github.com/example/foo.git' } bar = { path = 'my/local/bar' } [dependencies.baz] git = 'https://github.com/example/baz.git' [patch.'https://github.com/example/baz'] baz = { git = 'https://github.com/example/patched-baz.git', branch = 'my-branch' } Note : The [patch] table can also be specified as a configuration option , such as in a .cargo/config.toml file or a CLI option like --config 'patch.crates-io.rand.path="rand"' . This can be useful for local-only changes that you don’t want to commit, or temporarily testing a patch. The [patch] table is made of dependency-like sub-tables. Each key after [patch] is a URL of the source that is being patched, or the name of a registry. The name crates-io may be used to override the default registry crates.io . The first [patch] in the example above demonstrates overriding crates.io , and the second [patch] demonstrates overriding a git source. Each entry in these tables is a normal dependency specification, the same as found in the [dependencies] section of the manifest. 
The dependencies listed in the [patch] section are resolved and used to patch the source at the URL specified. The above manifest snippet patches the crates-io source (e.g. crates.io itself) with the foo crate and bar crate. It also patches the https://github.com/example/baz source with a my-branch that comes from elsewhere. Sources can be patched with versions of crates that do not exist, and they can also be patched with versions of crates that already exist. If a source is patched with a crate version that already exists in the source, then the source’s original crate is replaced. Cargo only looks at the patch settings in the Cargo.toml manifest at the root of the workspace. Patch settings defined in dependencies will be ignored. The [replace] section Note : [replace] is deprecated. You should use the [patch] table instead. This section of Cargo.toml can be used to override dependencies with other copies. The syntax is similar to the [dependencies] section: [replace] "foo:0.1.0" = { git = 'https://github.com/example/foo.git' } "bar:1.0.2" = { path = 'my/local/bar' } Each key in the [replace] table is a package ID specification , which allows arbitrarily choosing a node in the dependency graph to override (the 3-part version number is required). The value of each key is the same as the [dependencies] syntax for specifying dependencies, except that you can’t specify features. Note that when a crate is overridden the copy it’s overridden with must have both the same name and version, but it can come from a different source (e.g., git or a local path). Cargo only looks at the replace settings in the Cargo.toml manifest at the root of the workspace. Replace settings defined in dependencies will be ignored. paths overrides Sometimes you’re only temporarily working on a crate and you don’t want to have to modify Cargo.toml like with the [patch] section above. For this use case Cargo offers a much more limited version of overrides called path overrides . Path overrides are specified through .cargo/config.toml instead of Cargo.toml . Inside of .cargo/config.toml you’ll specify a key called paths : paths = ["/path/to/uuid"] This array should be filled with directories that contain a Cargo.toml . In this instance, we’re just adding uuid , so it will be the only one that’s overridden. This path can be either absolute or relative to the directory that contains the .cargo folder. Path overrides are more restricted than the [patch] section, however, in that they cannot change the structure of the dependency graph. When a path replacement is used then the previous set of dependencies must all match exactly to the new Cargo.toml specification. For example this means that path overrides cannot be used to test out adding a dependency to a crate. Instead, [patch] must be used in that situation. As a result, usage of a path override is typically isolated to quick bug fixes rather than larger changes. Note : using a local configuration to override paths will only work for crates that have been published to crates.io . You cannot use this feature to tell Cargo how to find local unpublished crates. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/edition-guide/editions/index.html#editions-do-not-split-the-ecosystem | What are editions? - The Rust Edition Guide Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Edition Guide What are Editions? In May 2015, the release of Rust 1.0 established " stability without stagnation " as a core Rust axiom. Since then, Rust has committed to a pivotal rule: once a feature is released through stable , contributors will continue to support that feature for all future releases. However, there are times when it's useful to make backwards-incompatible changes to the language. A common example is the introduction of a new keyword. For instance, early versions of Rust didn't feature the async and await keywords. If Rust had suddenly introduced these new keywords, some code would have broken: let async = 1; would no longer work. Rust uses editions to solve this problem. When there are backwards-incompatible changes, they are pushed into the next edition. Since editions are opt-in, existing crates won't use the changes unless they explicitly migrate into the new edition. For example, the latest version of Rust doesn't treat async as a keyword unless edition 2018 or later is chosen. Each crate chooses its edition within its Cargo.toml file . When creating a new crate with Cargo, it will automatically select the newest stable edition. Editions do not split the ecosystem When creating editions, there is one most consequential rule: crates in one edition must seamlessly interoperate with those compiled with other editions. In other words, each crate can decide when to migrate to a new edition independently. This decision is 'private' - it won't affect other crates in the ecosystem. For Rust, this required compatibility implies some limits on the kinds of changes that can be featured in an edition. As a result, changes found in new Rust editions tend to be 'skin deep'. All Rust code - regardless of edition - will ultimately compile down to the same internal representation within the compiler. Edition migration is easy and largely automated Rust aims to make upgrading to a new edition an easy process. When a new edition releases, crate authors may use automatic migration tooling within cargo to migrate. Cargo will then make minor changes to the code to make it compatible with the new version. For example, when migrating to Rust 2018, anything named async will now use the equivalent raw identifier syntax : r#async . Cargo's automatic migrations aren't perfect: there may still be corner cases where manual changes are required. It aims to avoid changes to semantics that could affect the correctness or performance of the code. What this guide covers In addition to tooling, this Rust Edition Guide also covers the changes that are part of each edition. It describes each change and links to additional details, if available. It also covers corner cases or tricky details crate authors should be aware of. Crate authors should find: An overview of editions A migration guide for specific editions A quick troubleshooting reference when automated tooling isn't working. | 2026-01-13T09:29:13 |
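As a small illustrative sketch of the migration behaviour described above (this snippet is not from the guide itself): once a crate moves to edition 2018 or later, an identifier that used to be spelled async has to use the raw identifier syntax, for example:

// In editions 2018 and later `async` is a keyword, so migrated code refers
// to an identifier that used to be named `async` via raw identifier syntax.
fn main() {
    let r#async = 1; // plain `let async = 1;` no longer parses in 2018+
    println!("{}", r#async);
}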
https://doc.rust-lang.org/cargo/reference/features.html#features | Features - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book Features Cargo “features” provide a mechanism to express conditional compilation and optional dependencies . A package defines a set of named features in the [features] table of Cargo.toml , and each feature can either be enabled or disabled. Features for the package being built can be enabled on the command-line with flags such as --features . Features for dependencies can be enabled in the dependency declaration in Cargo.toml . Note : New crates or versions published on crates.io are now limited to a maximum of 300 features. Exceptions are granted on a case-by-case basis. See this blog post for details. Participation in solution discussions is encouraged via the crates.io Zulip stream. See also the Features Examples chapter for some examples of how features can be used. The [features] section Features are defined in the [features] table in Cargo.toml . Each feature specifies an array of other features or optional dependencies that it enables. The following examples illustrate how features could be used for a 2D image processing library where support for different image formats can be optionally included: [features] # Defines a feature named `webp` that does not enable any other features. webp = [] With this feature defined, cfg expressions can be used to conditionally include code to support the requested feature at compile time. For example, inside lib.rs of the package could include this: #![allow(unused)] fn main() { // This conditionally includes a module which implements WEBP support. #[cfg(feature = "webp")] pub mod webp; } Cargo sets features in the package using the rustc --cfg flag , and code can test for their presence with the cfg attribute or the cfg macro . Features can list other features to enable. For example, the ICO image format can contain BMP and PNG images, so when it is enabled, it should make sure those other features are enabled, too: [features] bmp = [] png = [] ico = ["bmp", "png"] webp = [] Feature names may include characters from the Unicode XID standard (which includes most letters), and additionally allows starting with _ or digits 0 through 9 , and after the first character may also contain - , + , or . . Note : crates.io imposes additional constraints on feature name syntax that they must only be ASCII alphanumeric characters or _ , - , or + . The default feature By default, all features are disabled unless explicitly enabled. This can be changed by specifying the default feature: [features] default = ["ico", "webp"] bmp = [] png = [] ico = ["bmp", "png"] webp = [] When the package is built, the default feature is enabled which in turn enables the listed features. This behavior can be changed by: The --no-default-features command-line flag disables the default features of the package. The default-features = false option can be specified in a dependency declaration . Note : Be careful about choosing the default feature set. The default features are a convenience that make it easier to use a package without forcing the user to carefully select which features to enable for common use, but there are some drawbacks. Dependencies automatically enable default features unless default-features = false is specified. 
This can make it difficult to ensure that the default features are not enabled, especially for a dependency that appears multiple times in the dependency graph. Every package must ensure that default-features = false is specified to avoid enabling them. Another issue is that it can be a SemVer incompatible change to remove a feature from the default set, so you should be confident that you will keep those features. Optional dependencies Dependencies can be marked “optional”, which means they will not be compiled by default. For example, let’s say that our 2D image processing library uses an external package to handle GIF images. This can be expressed like this: [dependencies] gif = { version = "0.11.1", optional = true } By default, this optional dependency implicitly defines a feature that looks like this: [features] gif = ["dep:gif"] This means that this dependency will only be included if the gif feature is enabled. The same cfg(feature = "gif") syntax can be used in the code, and the dependency can be enabled just like any feature such as --features gif (see Command-line feature options below). In some cases, you may not want to expose a feature that has the same name as the optional dependency. For example, perhaps the optional dependency is an internal detail, or you want to group multiple optional dependencies together, or you just want to use a better name. If you specify the optional dependency with the dep: prefix anywhere in the [features] table, that disables the implicit feature. Note : The dep: syntax is only available starting with Rust 1.60. Previous versions can only use the implicit feature name. For example, let’s say in order to support the AVIF image format, our library needs two other dependencies to be enabled: [dependencies] ravif = { version = "0.6.3", optional = true } rgb = { version = "0.8.25", optional = true } [features] avif = ["dep:ravif", "dep:rgb"] In this example, the avif feature will enable the two listed dependencies. This also avoids creating the implicit ravif and rgb features, since we don’t want users to enable those individually as they are internal details to our crate. Note : Another way to optionally include a dependency is to use platform-specific dependencies . Instead of using features, these are conditional based on the target platform. Dependency features Features of dependencies can be enabled within the dependency declaration. The features key indicates which features to enable: [dependencies] # Enables the `derive` feature of serde. serde = { version = "1.0.118", features = ["derive"] } The default features can be disabled using default-features = false : [dependencies] flate2 = { version = "1.0.3", default-features = false, features = ["zlib-rs"] } Note : This may not ensure the default features are disabled. If another dependency includes flate2 without specifying default-features = false , then the default features will be enabled. See feature unification below for more details. Features of dependencies can also be enabled in the [features] table. The syntax is "package-name/feature-name" . For example: [dependencies] jpeg-decoder = { version = "0.1.20", default-features = false } [features] # Enables parallel processing support by enabling the "rayon" feature of jpeg-decoder. parallel = ["jpeg-decoder/rayon"] The "package-name/feature-name" syntax will also enable package-name if it is an optional dependency. Often this is not what you want. You can add a ? 
as in "package-name?/feature-name" which will only enable the given feature if something else enables the optional dependency. Note : The ? syntax is only available starting with Rust 1.60. For example, let’s say we have added some serialization support to our library, and it requires enabling a corresponding feature in some optional dependencies. That can be done like this: [dependencies] serde = { version = "1.0.133", optional = true } rgb = { version = "0.8.25", optional = true } [features] serde = ["dep:serde", "rgb?/serde"] In this example, enabling the serde feature will enable the serde dependency. It will also enable the serde feature for the rgb dependency, but only if something else has enabled the rgb dependency. Command-line feature options The following command-line flags can be used to control which features are enabled: --features FEATURES : Enables the listed features. Multiple features may be separated with commas or spaces. If using spaces, be sure to use quotes around all the features if running Cargo from a shell (such as --features "foo bar" ). If building multiple packages in a workspace , the package-name/feature-name syntax can be used to specify features for specific workspace members. --all-features : Activates all features of all packages selected on the command line. --no-default-features : Does not activate the default feature of the selected packages. NOTE : check the individual subcommand documentation for details. Not all flags are available for all subcommands. Feature unification Features are unique to the package that defines them. Enabling a feature on a package does not enable a feature of the same name on other packages. When a dependency is used by multiple packages, Cargo will use the union of all features enabled on that dependency when building it. This helps ensure that only a single copy of the dependency is used. See the features section of the resolver documentation for more details. For example, let’s look at the winapi package which uses a large number of features. If your package depends on a package foo which enables the “fileapi” and “handleapi” features of winapi , and another dependency bar which enables the “std” and “winnt” features of winapi , then winapi will be built with all four of those features enabled. A consequence of this is that features should be additive . That is, enabling a feature should not disable functionality, and it should usually be safe to enable any combination of features. A feature should not introduce a SemVer-incompatible change . For example, if you want to optionally support no_std environments, do not use a no_std feature. Instead, use a std feature that enables std . For example: #![allow(unused)] #![no_std] fn main() { #[cfg(feature = "std")] extern crate std; #[cfg(feature = "std")] pub fn function_that_requires_std() { // ... } } Mutually exclusive features There are rare cases where features may be mutually incompatible with one another. This should be avoided if at all possible, because it requires coordinating all uses of the package in the dependency graph to cooperate to avoid enabling them together. If it is not possible, consider adding a compile error to detect this scenario. For example: #[cfg(all(feature = "foo", feature = "bar"))] compile_error!("feature \"foo\" and feature \"bar\" cannot be enabled at the same time"); Instead of using mutually exclusive features, consider some other options: Split the functionality into separate packages. 
When there is a conflict, choose one feature over another . The cfg-if package can help with writing more complex cfg expressions. Architect the code to allow the features to be enabled concurrently, and use runtime options to control which is used. For example, use a config file, command-line argument, or environment variable to choose which behavior to enable. Inspecting resolved features In complex dependency graphs, it can sometimes be difficult to understand how different features get enabled on various packages. The cargo tree command offers several options to help inspect and visualize which features are enabled. Some options to try: cargo tree -e features : This will show features in the dependency graph. Each feature will appear showing which package enabled it. cargo tree -f "{p} {f}" : This is a more compact view that shows a comma-separated list of features enabled on each package. cargo tree -e features -i foo : This will invert the tree, showing how features flow into the given package “foo”. This can be useful because viewing the entire graph can be quite large and overwhelming. Use this when you are trying to figure out which features are enabled on a specific package and why. See the example at the bottom of the cargo tree page on how to read this. Feature resolver version 2 A different feature resolver can be specified with the resolver field in Cargo.toml , like this: [package] name = "my-package" version = "1.0.0" resolver = "2" See the resolver versions section for more detail on specifying resolver versions. The version "2" resolver avoids unifying features in a few situations where that unification can be unwanted. The exact situations are described in the resolver chapter , but in short, it avoids unifying in these situations: Features enabled on platform-specific dependencies for target architectures not currently being built are ignored. Build-dependencies and proc-macros do not share features with normal dependencies. Dev-dependencies do not activate features unless building a Cargo target that needs them (like tests or examples). Avoiding the unification is necessary for some situations. For example, if a build-dependency enables a std feature, and the same dependency is used as a normal dependency for a no_std environment, enabling std would break the build. However, one drawback is that this can increase build times because the dependency is built multiple times (each with different features). When using the version "2" resolver, it is recommended to check for dependencies that are built multiple times to reduce overall build time. If it is not required to build those duplicated packages with separate features, consider adding features to the features list in the dependency declaration so that the duplicates end up with the same features (and thus Cargo will build it only once). You can detect these duplicate dependencies with the cargo tree --duplicates command. It will show which packages are built multiple times; look for any entries listed with the same version. See Inspecting resolved features for more on fetching information on the resolved features. For build dependencies, this is not necessary if you are cross-compiling with the --target flag because build dependencies are always built separately from normal dependencies in that scenario. Resolver version 2 command-line flags The resolver = "2" setting also changes the behavior of the --features and --no-default-features command-line options . 
With version "1" , you can only enable features for the package in the current working directory. For example, in a workspace with packages foo and bar , and you are in the directory for package foo , and ran the command cargo build -p bar --features bar-feat , this would fail because the --features flag only allowed enabling features on foo . With resolver = "2" , the features flags allow enabling features for any of the packages selected on the command-line with -p and --workspace flags. For example: # This command is allowed with resolver = "2", regardless of which directory # you are in. cargo build -p foo -p bar --features foo-feat,bar-feat # This explicit equivalent works with any resolver version: cargo build -p foo -p bar --features foo/foo-feat,bar/bar-feat Additionally, with resolver = "1" , the --no-default-features flag only disables the default feature for the package in the current directory. With version “2”, it will disable the default features for all workspace members. Build scripts Build scripts can detect which features are enabled on the package by inspecting the CARGO_FEATURE_<name> environment variable, where <name> is the feature name converted to uppercase and - converted to _ . Required features The required-features field can be used to disable specific Cargo targets if a feature is not enabled. See the linked documentation for more details. SemVer compatibility Enabling a feature should not introduce a SemVer-incompatible change. For example, the feature shouldn’t change an existing API in a way that could break existing uses. More details about what changes are compatible can be found in the SemVer Compatibility chapter . Care should be taken when adding and removing feature definitions and optional dependencies, as these can sometimes be backwards-incompatible changes. More details can be found in the Cargo section of the SemVer Compatibility chapter. In short, follow these rules: The following is usually safe to do in a minor release: Add a new feature or optional dependency . Change the features used on a dependency . The following should usually not be done in a minor release: Remove a feature or optional dependency . Moving existing public code behind a feature . Remove a feature from a feature list . See the links for caveats and examples. Feature documentation and discovery You are encouraged to document which features are available in your package. This can be done by adding doc comments at the top of lib.rs . As an example, see the regex crate source , which when rendered can be viewed on docs.rs . If you have other documentation, such as a user guide, consider adding the documentation there (for example, see serde.rs ). If you have a binary project, consider documenting the features in the README or other documentation for the project (for example, see sccache ). Clearly documenting the features can set expectations about features that are considered “unstable” or otherwise shouldn’t be used. For example, if there is an optional dependency, but you don’t want users to explicitly list that optional dependency as a feature, exclude it from the documented list. Documentation published on docs.rs can use metadata in Cargo.toml to control which features are enabled when the documentation is built. See docs.rs metadata documentation for more details. Note : Rustdoc has experimental support for annotating the documentation to indicate which features are required to use certain APIs. See the doc_cfg documentation for more details. 
An example is the syn documentation , where you can see colored boxes which note which features are required to use it. Discovering features When features are documented in the library API, this can make it easier for your users to discover which features are available and what they do. If the feature documentation for a package isn’t readily available, you can look at the Cargo.toml file, but sometimes it can be hard to track it down. The crate page on crates.io has a link to the source repository if available. Tools like cargo vendor or cargo-clone-crate can be used to download the source and inspect it. Feature combinations Because features are a form of conditional compilation, they require an exponential number of configurations and test cases to be 100% covered. By default, tests, docs, and other tooling such as Clippy will only run with the default set of features. We encourage you to consider your strategy and tooling in regards to different feature combinations — Every project will have different requirements in conjunction with time, resources, and the cost-benefit of covering specific scenarios. Common configurations may be with / without default features, specific combinations of features, or all combinations of features. | 2026-01-13T09:29:13 |
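The Build scripts paragraph above notes that a build script can inspect the CARGO_FEATURE_<name> environment variables that Cargo sets. A minimal build.rs sketch, assuming a hypothetical webp feature like the one used in the [features] examples on that page:

// build.rs: detect whether the (hypothetical) `webp` feature is enabled.
// Cargo uppercases the feature name and converts `-` to `_` in the variable.
use std::env;

fn main() {
    if env::var_os("CARGO_FEATURE_WEBP").is_some() {
        // React to the feature, e.g. compile or link extra native code here.
        println!("cargo:warning=building with the `webp` feature enabled");
    }
}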
https://doc.rust-lang.org/std/fmt/trait.Display.html | Display in std::fmt - Rust This old browser is unsupported and will most likely display funky things. Display std 1.92.0 (ded5c06cf 2025-12-08) Display Sections Completeness and parseability Internationalization Examples Required Methods fmt Implementors In std:: fmt std :: fmt Trait Display Copy item path 1.0.0 · Source pub trait Display { // Required method fn fmt (&self, f: &mut Formatter <'_>) -> Result < () , Error >; } Expand description Format trait for an empty format, {} . Implementing this trait for a type will automatically implement the ToString trait for the type, allowing the usage of the .to_string() method. Prefer implementing the Display trait for a type, rather than ToString . Display is similar to Debug , but Display is for user-facing output, and so cannot be derived. For more information on formatters, see the module-level documentation . § Completeness and parseability Display for a type might not necessarily be a lossless or complete representation of the type. It may omit internal state, precision, or other information the type does not consider important for user-facing output, as determined by the type. As such, the output of Display might not be possible to parse, and even if it is, the result of parsing might not exactly match the original value. However, if a type has a lossless Display implementation whose output is meant to be conveniently machine-parseable and not just meant for human consumption, then the type may wish to accept the same format in FromStr , and document that usage. Having both Display and FromStr implementations where the result of Display cannot be parsed with FromStr may surprise users. § Internationalization Because a type can only have one Display implementation, it is often preferable to only implement Display when there is a single most “obvious” way that values can be formatted as text. This could mean formatting according to the “invariant” culture and “undefined” locale, or it could mean that the type display is designed for a specific culture/locale, such as developer logs. If not all values have a justifiably canonical textual format or if you want to support alternative formats not covered by the standard set of possible formatting traits , the most flexible approach is display adapters: methods like str::escape_default or Path::display which create a wrapper implementing Display to output the specific display format. § Examples Implementing Display on a type: use std::fmt; struct Point { x: i32, y: i32, } impl fmt::Display for Point { fn fmt( & self , f: &mut fmt::Formatter< '_ >) -> fmt::Result { write! (f, "({}, {})" , self .x, self .y) } } let origin = Point { x: 0 , y: 0 }; assert_eq! ( format! ( "The origin is: {origin}" ), "The origin is: (0, 0)" ); Required Methods § 1.0.0 · Source fn fmt (&self, f: &mut Formatter <'_>) -> Result < () , Error > Formats the value using the given formatter. § Errors This function should return Err if, and only if, the provided Formatter returns Err . String formatting is considered an infallible operation; this function only returns a Result because writing to the underlying stream might fail and it must provide a way to propagate the fact that an error has occurred back up the stack. § Examples use std::fmt; struct Position { longitude: f32, latitude: f32, } impl fmt::Display for Position { fn fmt( & self , f: &mut fmt::Formatter< '_ >) -> fmt::Result { write! (f, "({}, {})" , self .longitude, self .latitude) } } assert_eq! 
( "(1.987, 2.983)" , format! ( "{}" , Position { longitude: 1.987 , latitude: 2.983 , }), ); Implementors § Source § impl Display for AsciiChar 1.34.0 · Source § impl Display for Infallible 1.0.0 · Source § impl Display for VarError 1.17.0 · Source § impl Display for FromBytesWithNulError 1.89.0 · Source § impl Display for std::fs:: TryLockError 1.60.0 · Source § impl Display for ErrorKind 1.7.0 · Source § impl Display for IpAddr 1.0.0 · Source § impl Display for SocketAddr 1.86.0 · Source § impl Display for GetDisjointMutError 1.15.0 · Source § impl Display for RecvTimeoutError 1.0.0 · Source § impl Display for TryRecvError 1.0.0 · Source § impl Display for bool 1.0.0 · Source § impl Display for char 1.0.0 · Source § impl Display for f16 1.0.0 · Source § impl Display for f32 1.0.0 · Source § impl Display for f64 1.0.0 · Source § impl Display for i8 1.0.0 · Source § impl Display for i16 1.0.0 · Source § impl Display for i32 1.0.0 · Source § impl Display for i64 1.0.0 · Source § impl Display for i128 1.0.0 · Source § impl Display for isize Source § impl Display for ! 1.0.0 · Source § impl Display for str 1.0.0 · Source § impl Display for u8 1.0.0 · Source § impl Display for u16 1.0.0 · Source § impl Display for u32 1.0.0 · Source § impl Display for u64 1.0.0 · Source § impl Display for u128 1.0.0 · Source § impl Display for usize 1.26.0 · Source § impl Display for PanicInfo <'_> 1.81.0 · Source § impl Display for PanicMessage <'_> Source § impl Display for AllocError 1.28.0 · Source § impl Display for LayoutError 1.35.0 · Source § impl Display for TryFromSliceError 1.39.0 · Source § impl Display for std::ascii:: EscapeDefault 1.65.0 · Source § impl Display for Backtrace Source § impl Display for ByteStr Source § impl Display for ByteString 1.13.0 · Source § impl Display for BorrowError 1.13.0 · Source § impl Display for BorrowMutError 1.34.0 · Source § impl Display for CharTryFromError 1.9.0 · Source § impl Display for DecodeUtf16Error 1.20.0 · Source § impl Display for std::char:: EscapeDebug 1.16.0 · Source § impl Display for std::char:: EscapeDefault 1.16.0 · Source § impl Display for std::char:: EscapeUnicode 1.20.0 · Source § impl Display for ParseCharError 1.16.0 · Source § impl Display for ToLowercase 1.16.0 · Source § impl Display for ToUppercase 1.59.0 · Source § impl Display for TryFromCharError Source § impl Display for UnorderedKeyError 1.57.0 · Source § impl Display for TryReserveError 1.0.0 · Source § impl Display for JoinPathsError 1.87.0 · Source § impl Display for std::ffi::os_str:: Display <'_> 1.69.0 · Source § impl Display for FromBytesUntilNulError 1.58.0 · Source § impl Display for FromVecWithNulError 1.7.0 · Source § impl Display for IntoStringError 1.0.0 · Source § impl Display for NulError 1.0.0 · Source § impl Display for std::io:: Error 1.56.0 · Source § impl Display for WriterPanicked 1.4.0 · Source § impl Display for AddrParseError 1.0.0 · Source § impl Display for Ipv4Addr 1.0.0 · Source § impl Display for Ipv6Addr Writes an Ipv6Addr, conforming to the canonical style described by RFC 5952 . 1.0.0 · Source § impl Display for SocketAddrV4 1.0.0 · Source § impl Display for SocketAddrV6 1.0.0 · Source § impl Display for ParseFloatError 1.0.0 · Source § impl Display for ParseIntError 1.34.0 · Source § impl Display for TryFromIntError 1.63.0 · Source § impl Display for InvalidHandleError Available on Windows only. 1.63.0 · Source § impl Display for NullHandleError Available on Windows only. 
1.26.0 · Source § impl Display for Location <'_> 1.26.0 · Source § impl Display for PanicHookInfo <'_> 1.0.0 · Source § impl Display for std::path:: Display <'_> Source § impl Display for NormalizeError 1.7.0 · Source § impl Display for StripPrefixError 1.0.0 · Source § impl Display for ExitStatus Source § impl Display for ExitStatusError 1.0.0 · Source § impl Display for ParseBoolError 1.0.0 · Source § impl Display for Utf8Error 1.0.0 · Source § impl Display for FromUtf8Error 1.0.0 · Source § impl Display for FromUtf16Error 1.0.0 · Source § impl Display for String 1.0.0 · Source § impl Display for RecvError Source § impl Display for WouldBlock 1.26.0 · Source § impl Display for AccessError 1.8.0 · Source § impl Display for SystemTimeError 1.66.0 · Source § impl Display for TryFromFloatSecsError 1.0.0 · Source § impl Display for Arguments <'_> 1.0.0 · Source § impl Display for std::fmt:: Error 1.60.0 · Source § impl<'a> Display for EscapeAscii <'a> 1.34.0 · Source § impl<'a> Display for std::str:: EscapeDebug <'a> 1.34.0 · Source § impl<'a> Display for std::str:: EscapeDefault <'a> 1.34.0 · Source § impl<'a> Display for std::str:: EscapeUnicode <'a> Source § impl<'a, K, V, A> Display for std::collections::btree_map:: OccupiedError <'a, K, V, A> where K: Debug + Ord , V: Debug , A: Allocator + Clone , Source § impl<'a, K: Debug , V: Debug > Display for std::collections::hash_map:: OccupiedError <'a, K, V> 1.0.0 · Source § impl<B> Display for Cow <'_, B> where B: Display + ToOwned + ? Sized , <B as ToOwned >:: Owned : Display , Source § impl<E> Display for Report <E> where E: Error , Source § impl<F> Display for FromFn <F> where F: Fn (&mut Formatter <'_>) -> Result < () , Error >, 1.33.0 · Source § impl<Ptr> Display for Pin <Ptr> where Ptr: Display , 1.0.0 · Source § impl<T> Display for std::sync:: TryLockError <T> Source § impl<T> Display for SendTimeoutError <T> 1.0.0 · Source § impl<T> Display for TrySendError <T> 1.0.0 · Source § impl<T> Display for &T where T: Display + ? Sized , 1.0.0 · Source § impl<T> Display for &mut T where T: Display + ? Sized , Source § impl<T> Display for ThinBox <T> where T: Display + ? Sized , 1.20.0 · Source § impl<T> Display for Ref <'_, T> where T: Display + ? Sized , 1.20.0 · Source § impl<T> Display for RefMut <'_, T> where T: Display + ? Sized , 1.28.0 · Source § impl<T> Display for NonZero <T> where T: ZeroablePrimitive + Display , 1.74.0 · Source § impl<T> Display for Saturating <T> where T: Display , 1.10.0 · Source § impl<T> Display for Wrapping <T> where T: Display , 1.0.0 · Source § impl<T> Display for SendError <T> 1.0.0 · Source § impl<T> Display for PoisonError <T> 1.0.0 · Source § impl<T, A> Display for Box <T, A> where T: Display + ? Sized , A: Allocator , 1.0.0 · Source § impl<T, A> Display for Rc <T, A> where T: Display + ? Sized , A: Allocator , Source § impl<T, A> Display for UniqueRc <T, A> where T: Display + ? Sized , A: Allocator , 1.0.0 · Source § impl<T, A> Display for Arc <T, A> where T: Display + ? Sized , A: Allocator , Source § impl<T, A> Display for UniqueArc <T, A> where T: Display + ? Sized , A: Allocator , Source § impl<T: Display + ? Sized > Display for ReentrantLockGuard <'_, T> Source § impl<T: ? Sized + Display > Display for std::sync::nonpoison:: MappedMutexGuard <'_, T> Source § impl<T: ? Sized + Display > Display for std::sync::nonpoison:: MappedRwLockReadGuard <'_, T> Source § impl<T: ? Sized + Display > Display for std::sync::nonpoison:: MappedRwLockWriteGuard <'_, T> Source § impl<T: ? 
Sized + Display > Display for std::sync::nonpoison:: MutexGuard <'_, T> Source § impl<T: ? Sized + Display > Display for std::sync::nonpoison:: RwLockReadGuard <'_, T> Source § impl<T: ? Sized + Display > Display for std::sync::nonpoison:: RwLockWriteGuard <'_, T> Source § impl<T: ? Sized + Display > Display for std::sync:: MappedMutexGuard <'_, T> Source § impl<T: ? Sized + Display > Display for std::sync:: MappedRwLockReadGuard <'_, T> Source § impl<T: ? Sized + Display > Display for std::sync:: MappedRwLockWriteGuard <'_, T> 1.20.0 · Source § impl<T: ? Sized + Display > Display for std::sync:: MutexGuard <'_, T> 1.20.0 · Source § impl<T: ? Sized + Display > Display for std::sync:: RwLockReadGuard <'_, T> 1.20.0 · Source § impl<T: ? Sized + Display > Display for std::sync:: RwLockWriteGuard <'_, T> 1.0.0 · Source § impl<W> Display for IntoInnerError <W> | 2026-01-13T09:29:13 |
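The "Completeness and parseability" section above suggests that a type with a lossless, machine-parseable Display output may want to accept the same format in FromStr. A minimal sketch of that pairing, reusing the Point type from the page's own example (the parsing details here are illustrative and not part of the standard library documentation):

use std::fmt;
use std::str::FromStr;

struct Point { x: i32, y: i32 }

impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "({}, {})", self.x, self.y)
    }
}

impl FromStr for Point {
    type Err = String;

    // Parses the exact "(x, y)" format produced by Display above.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let inner = s
            .strip_prefix('(')
            .and_then(|rest| rest.strip_suffix(')'))
            .ok_or_else(|| format!("missing parentheses in {:?}", s))?;
        let (x, y) = inner
            .split_once(", ")
            .ok_or_else(|| format!("missing separator in {:?}", s))?;
        let x = x.parse().map_err(|e| format!("bad x: {}", e))?;
        let y = y.parse().map_err(|e| format!("bad y: {}", e))?;
        Ok(Point { x, y })
    }
}

fn main() {
    let p: Point = "(3, -4)".parse().expect("round-trips the Display format");
    assert_eq!(p.to_string(), "(3, -4)");
}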
https://aws.amazon.com/blogs/big-data/category/application-integration/amazon-eventbridge/#aws-page-content-main | Amazon EventBridge | AWS Big Data Blog Category: Amazon EventBridge Automate and orchestrate Amazon EMR jobs using AWS Step Functions and Amazon EventBridge by Senthil Kamala Rathinam and Shashidhar Makkapati on 15 SEP 2025 in Advanced (300) , Amazon CloudWatch , Amazon EC2 , Amazon EMR , Amazon EventBridge , Analytics , AWS Step Functions , Technical How-to Permalink Comments Share In this post, we discuss how to build a fully automated, scheduled Spark processing pipeline using Amazon EMR on EC2, orchestrated with Step Functions and triggered by EventBridge. We walk through how to deploy this solution using AWS CloudFormation, process COVID-19 public dataset data in Amazon Simple Storage Service (Amazon S3), and store the aggregated results in Amazon S3. Enhance Amazon EMR observability with automated incident mitigation using Amazon Bedrock and Amazon Managed Grafana by Yu-Ting Su on 14 AUG 2025 in Amazon Bedrock , Amazon Bedrock Agents , Amazon Bedrock Knowledge Bases , Amazon EMR , Amazon EventBridge , Amazon Managed Grafana , Amazon Managed Service for Prometheus , Amazon Nova , Analytics , AWS Big Data , AWS Lambda , Technical How-to Permalink Comments Share In this post, we demonstrate how to integrate real-time monitoring with AI-powered remediation suggestions, combining Amazon Managed Grafana for visualization, Amazon Bedrock for intelligent response recommendations, and AWS Systems Manager for automated remediation actions on Amazon Web Services (AWS). How Stifel built a modern data platform using AWS Glue and an event-driven domain architecture by Amit Maindola, Srinivas Kandi, Hossein Johari, Ahmad Rawashdeh, and Lei Meng on 07 JUL 2025 in Advanced (300) , Amazon Athena , Amazon EventBridge , Amazon Simple Storage Service (S3) , Analytics , Architecture , AWS Glue , AWS Lake Formation , Best Practices , Experience-Based Acceleration , Technical How-to , Thought Leadership Permalink Comments Share In this post, we show you how Stifel implemented a modern data platform using AWS services and open data standards, building an event-driven architecture for domain data products while centralizing the metadata to facilitate discovery and sharing of data products. How Open Universities Australia modernized their data platform and significantly reduced their ETL costs with AWS Cloud Development Kit and AWS Step Functions by Michael Davies and Emma Arrigo on 30 JAN 2025 in Amazon AppFlow , Amazon EventBridge , Amazon Redshift , Amazon Simple Storage Service (S3) , Asia Pacific , AWS Glue , AWS Lambda , AWS Serverless Application Model , AWS Step Functions , Customer Solutions , Education , Higher education Permalink Comments Share At Open Universities Australia (OUA), we empower students to explore a vast array of degrees from renowned Australian universities, all delivered through online learning. In this post, we show you how we used AWS services to replace our existing third-party ETL tool, improving the team's productivity and producing a significant reduction in our ETL operational costs.
How MuleSoft achieved cloud excellence through an event-driven Amazon Redshift lakehouse architecture by Sean Zou , Terry Quan , Audrey Yuan , Avijit Goswami , and Rueben Jimenez on 28 JAN 2025 in Amazon EventBridge , Amazon Redshift , Amazon Simple Storage Service (S3) , AWS Glue , AWS Trusted Advisor , AWS Well-Architected , Customer Solutions Permalink Comments Share In our previous thought leadership blog post Why a Cloud Operating Model we defined a COE Framework and showed why MuleSoft implemented it and the benefits they received from it. In this post, we’ll dive into the technical implementation describing how MuleSoft used Amazon EventBridge, Amazon Redshift, Amazon Redshift Spectrum, Amazon S3, & AWS Glue to implement it. Implement a custom subscription workflow for unmanaged Amazon S3 assets published with Amazon DataZone by Somdeb Bhattacharjee and Sam Yates on 19 DEC 2024 in Advanced (300) , Amazon DataZone , Amazon EventBridge , Technical How-to Permalink Comments Share In this post, we demonstrate how to implement a custom subscription workflow using Amazon DataZone, Amazon EventBridge, and AWS Lambda to automate the fulfillment process for unmanaged data assets, such as unstructured data stored in Amazon S3. This solution enhances governance and simplifies access to unstructured data assets across the organization. Automate data loading from your database into Amazon Redshift using AWS Database Migration Service (DMS), AWS Step Functions, and the Redshift Data API by Ritesh Sinha , Praveen Kadipikonda , and Jagadish Kumar on 02 JUL 2024 in Amazon Database Migration Accelerator , Amazon EventBridge , Amazon Redshift , Analytics , AWS Big Data , AWS Step Functions Permalink Comments Share Amazon Redshift is a fast, scalable, secure, and fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing ETL (extract, transform, and load), business intelligence (BI), and reporting tools. Tens of thousands of customers use Amazon Redshift to process exabytes of data per […] Disaster recovery strategies for Amazon MWAA – Part 2 by Chandan Rupakheti and Parnab Basak on 17 JUN 2024 in Amazon EventBridge , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon Simple Storage Service (S3) , AWS Lambda , AWS Step Functions , Technical How-to Permalink Comments Share Amazon Managed Workflows for Apache Airflow (Amazon MWAA) is a fully managed orchestration service that makes it straightforward to run data processing workflows at scale. Amazon MWAA takes care of operating and scaling Apache Airflow so you can focus on developing workflows. However, although Amazon MWAA provides high availability within an AWS Region through features […] Gain insights from historical location data using Amazon Location Service and AWS analytics services by Alan Peaty and Parag Srivastava on 13 MAR 2024 in Amazon Athena , Amazon Data Firehose , Amazon EventBridge , Amazon Location , AWS Glue , Best Practices , Technical How-to Permalink Comments Share Many organizations around the world rely on the use of physical assets, such as vehicles, to deliver a service to their end-customers. 
By tracking these assets in real time and storing the results, asset owners can derive valuable insights on how their assets are being used to continuously deliver business improvements and plan for future […] Disaster recovery strategies for Amazon MWAA – Part 1 by Parnab Basak , Chandan Rupakheti , Vinod Jayendra , and Rupesh Tiwari on 16 JAN 2024 in Amazon CloudWatch , Amazon EventBridge , Amazon Managed Workflows for Apache Airflow (Amazon MWAA) , Amazon Simple Storage Service (S3) , Architecture , AWS Lambda , AWS Step Functions , Intermediate (200) , Technical How-to Permalink Comments Share In the dynamic world of cloud computing, ensuring the resilience and availability of critical applications is paramount. Disaster recovery (DR) is the process by which an organization anticipates and addresses technology-related disasters. For organizations implementing critical workload orchestration using Amazon Managed Workflows for Apache Airflow (Amazon MWAA), it is crucial to have a DR plan […] | 2026-01-13T09:29:13
https://doc.rust-lang.org/cargo/appendix/glossary.html#package-manager | Appendix: Glossary - The Cargo Book The Cargo Book Glossary Artifact An artifact is the file or set of files created as a result of the compilation process. This includes linkable libraries, executable binaries, and generated documentation. Cargo Cargo is the Rust package manager , and the primary topic of this book. Cargo.lock See lock file . Cargo.toml See manifest . Crate A Rust crate is either a library or an executable program, referred to as either a library crate or a binary crate , respectively. Every target defined for a Cargo package is a crate . Loosely, the term crate may refer to either the source code of the target or to the compiled artifact that the target produces. It may also refer to a compressed package fetched from a registry . The source code for a given crate may be subdivided into modules . Edition A Rust edition is a developmental landmark of the Rust language. The edition of a package is specified in the Cargo.toml manifest , and individual targets can specify which edition they use. See the Edition Guide for more information. Feature The meaning of feature depends on the context: A feature is a named flag which allows for conditional compilation. A feature can refer to an optional dependency, or an arbitrary name defined in a Cargo.toml manifest that can be checked within source code. Cargo has unstable feature flags which can be used to enable experimental behavior of Cargo itself. The Rust compiler and Rustdoc have their own unstable feature flags (see The Unstable Book and The Rustdoc Book ). CPU targets have target features which specify capabilities of a CPU. Index The index is the searchable list of crates in a registry . Lock file The Cargo.lock lock file is a file that captures the exact version of every dependency used in a workspace or package . It is automatically generated by Cargo. See Cargo.toml vs Cargo.lock . Manifest A manifest is a description of a package or a workspace in a file named Cargo.toml . A virtual manifest is a Cargo.toml file that only describes a workspace, and does not include a package. Member A member is a package that belongs to a workspace . Module Rust’s module system is used to organize code into logical units called modules , which provide isolated namespaces within the code. The source code for a given crate may be subdivided into one or more separate modules. This is usually done to organize the code into areas of related functionality or to control the visible scope (public/private) of symbols within the source (structs, functions, and so on). A Cargo.toml file is primarily concerned with the package it defines, its crates, and the packages of the crates on which they depend. Nevertheless, you will see the term “module” often when working with Rust, so you should understand its relationship to a given crate. Package A package is a collection of source files and a Cargo.toml manifest file which describes the package. A package has a name and version which is used for specifying dependencies between packages. A package contains multiple targets , each of which is a crate .
The Cargo.toml file describes the type of the crates (binary or library) within the package, along with some metadata about each one — how each is to be built, what their direct dependencies are, etc., as described throughout this book. The package root is the directory where the package’s Cargo.toml manifest is located. (Compare with workspace root .) The package ID specification , or SPEC , is a string used to uniquely reference a specific version of a package from a specific source. Small to medium sized Rust projects will only need a single package, though it is common for them to have multiple crates. Larger projects may involve multiple packages, in which case Cargo workspaces can be used to manage common dependencies and other related metadata between the packages. Package manager Broadly speaking, a package manager is a program (or collection of related programs) in a software ecosystem that automates the process of obtaining, installing, and upgrading artifacts. Within a programming language ecosystem, a package manager is a developer-focused tool whose primary functionality is to download library artifacts and their dependencies from some central repository; this capability is often combined with the ability to perform software builds (by invoking the language-specific compiler). Cargo is the package manager within the Rust ecosystem. Cargo downloads your Rust package ’s dependencies ( artifacts known as crates ), compiles your packages, makes distributable packages, and (optionally) uploads them to crates.io , the Rust community’s package registry . Package registry See registry . Project Another name for a package . Registry A registry is a service that contains a collection of downloadable crates that can be installed or used as dependencies for a package . The default registry in the Rust ecosystem is crates.io . The registry has an index which contains a list of all crates, and tells Cargo how to download the crates that are needed. Source A source is a provider that contains crates that may be included as dependencies for a package . There are several kinds of sources: Registry source — See registry . Local registry source — A set of crates stored as compressed files on the filesystem. See Local Registry Sources . Directory source — A set of crates stored as uncompressed files on the filesystem. See Directory Sources . Path source — An individual package located on the filesystem (such as a path dependency ) or a set of multiple packages (such as path overrides ). Git source — Packages located in a git repository (such as a git dependency or git source ). See Source Replacement for more information. Spec See package ID specification . Target The meaning of the term target depends on the context: Cargo Target — Cargo packages consist of targets which correspond to artifacts that will be produced. Packages can have library, binary, example, test, and benchmark targets. The list of targets are configured in the Cargo.toml manifest , often inferred automatically by the directory layout of the source files. Target Directory — Cargo places built artifacts in the target directory. By default this is a directory named target at the workspace root, or the package root if not using a workspace. The directory may be changed with the --target-dir command-line option, the CARGO_TARGET_DIR environment variable , or the build.target-dir config option . For more information see the build cache documentation. 
Target Architecture — The OS and machine architecture for the built artifacts are typically referred to as a target . Target Triple — A triple is a specific format for specifying a target architecture. Triples may be referred to as a target triple which is the architecture for the artifact produced, and the host triple which is the architecture that the compiler is running on. The target triple can be specified with the --target command-line option or the build.target config option . The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> where: arch = The base CPU architecture, for example x86_64 , i686 , arm , thumb , mips , etc. sub = The CPU sub-architecture, for example arm has v7 , v7s , v5te , etc. vendor = The vendor, for example unknown , apple , pc , nvidia , etc. sys = The system name, for example linux , windows , darwin , etc. none is typically used for bare-metal without an OS. abi = The ABI, for example gnu , android , eabi , etc. Some parameters may be omitted. Run rustc --print target-list for a list of supported targets. Test Targets Cargo test targets generate binaries which help verify proper operation and correctness of code. There are two types of test artifacts: Unit test — A unit test is an executable binary compiled directly from a library or a binary target. It contains the entire contents of the library or binary code, and runs #[test] annotated functions, intended to verify individual units of code. Integration test target — An integration test target is an executable binary compiled from a test target which is a distinct crate whose source is located in the tests directory or specified by the [[test]] table in the Cargo.toml manifest . It is intended to only test the public API of a library, or execute a binary to verify its operation. Workspace A workspace is a collection of one or more packages that share common dependency resolution (with a shared Cargo.lock lock file ), output directory, and various settings such as profiles. A virtual workspace is a workspace where the root Cargo.toml manifest does not define a package, and only lists the workspace members . The workspace root is the directory where the workspace’s Cargo.toml manifest is located. (Compare with package root .) | 2026-01-13T09:29:13 |
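To make the Feature entry in the glossary above concrete, here is a minimal sketch of a named flag checked in source code with cfg. The feature name extra-logging is hypothetical; it would be declared under [features] in the package's Cargo.toml.

// Compiled only when the (hypothetical) "extra-logging" feature is enabled.
#[cfg(feature = "extra-logging")]
fn log(msg: &str) {
    eprintln!("[log] {msg}");
}

// Fallback used when the feature is disabled, so callers compile either way.
#[cfg(not(feature = "extra-logging"))]
fn log(_msg: &str) {}

fn main() {
    log("built with or without the optional feature");
}

Building with cargo build --features extra-logging selects the first definition; a plain cargo build selects the second.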
https://aws.amazon.com/blogs/big-data/create-aws-glue-data-catalog-views-using-cross-account-definer-roles/#Comments | Create AWS Glue Data Catalog views using cross-account definer roles | AWS Big Data Blog by Aarthi Srinivasan and Sundeep Kumar on 08 JAN 2026 in Advanced (300) , Analytics , AWS Glue , Technical How-to With AWS Glue Data Catalog views you can create a SQL view in the Data Catalog that references one or more base tables. These multi-dialect views support various SQL query engines, providing consistent access across multiple Amazon Web Services (AWS) services including Amazon Athena , Amazon Redshift Spectrum, and Apache Spark in both Amazon EMR and AWS Glue 5.0 . You can now create Data Catalog views using a cross-account AWS Identity and Access Management (IAM) definer role. A definer role is an IAM role used to create the Data Catalog view and has SELECT permissions on all columns of the underlying base tables. This definer role is assumed by AWS Glue and AWS Lake Formation service principals to vend credentials to the base tables’ data whenever the view is queried. The definer role allows the Data Catalog view to be shared to principals or AWS accounts so that you can share a filtered subset of data without sharing the base tables. Previously, Data Catalog views required a definer role within the same AWS account as the base tables. The introduction of cross-account definer roles enables Data Catalog view creation in enterprise data mesh architectures. In this setup, database and table metadata is centralized in a governance account, and individual data owner accounts maintain control over table creation and management through their IAM roles. Data owner accounts can now create and manage Data Catalog views in the central governance accounts using their existing continuous integration and continuous delivery (CI/CD) pipeline roles. In this post, we show you a cross-account scenario involving two AWS accounts: a central governance account containing the tables and hosting the views and a data owner (producer) account with the IAM role used to create and manage views. We provide implementation details for both SPARK dialect using AWS SDK code samples and ATHENA dialect using SQL commands. Using this approach, you can implement sophisticated data governance models at enterprise scale while maintaining operational efficiency across your AWS environment. Key benefits Key benefits for cross-account definer roles are as follows: Enhanced data mesh support – Enterprises with multi-account data lakehouse architectures can now maintain their existing operational model where data owner accounts manage table creation and updates using their established IAM roles. These same roles can now create and manage Data Catalog views across account boundaries. Strengthened security controls – By keeping table and view management within data owner account roles: Security posture is enhanced through proper separation of duties. Audit trails become more comprehensive and meaningful. Access controls follow the principle of least privilege. Elimination of data duplication – Data owner accounts can create views in central accounts that: Provide access to specific data subsets without duplicating tables.
Reduce storage costs and management overhead. Maintain a single source of truth while enabling targeted data sharing. Solution overview An example customer has a database with two transaction tables in their central account, where the catalog and permissions are maintained. With the database shared to the data owner (producer) account, we create a Data Catalog view in the central account on these two tables, using the producer’s definer role. The view from the central account can be shared to additional consumer accounts and queried. We illustrate creating the SPARK dialect using create-table CLI , and add the ATHENA dialect for the same view from the Athena console . We also provide the AWS SDK sample code for CreateTable() and UpdateTable() , with view definition and a sample pySpark script to read and verify the view in AWS Glue. The following diagram shows the table, view, and definer IAM role placements between a central governance account and data producer account. Prerequisites To perform this solution, you need to have the following prerequisites: Two AWS accounts with AWS Lake Formation set up. For details, refer to Set up AWS Lake Formation . The Lake Formation setup includes registering your IAM admin role as Lake Formation administrator. In the Data Catalog settings , shown in the following screenshot, Default permissions for newly created databases and tables is set to use Lake Formation permissions only. Cross-account version settings is set to Version 4 . Create an IAM role Data-Analyst in the producer account. For the IAM permissions on this role, refer to Data analyst permissions . This role will also be used as the view definer role. Add the permissions to this definer role from the Prerequisites for creating views . Create database and tables in the central account In this step, you create two tables in the central governance account and populate them with few rows of data: Sign in to the central account as admin user. Open the Athena console and set up the Athena query results bucket . Run the following queries to create two sample Iceberg tables, representing bank customer transactions data: /* Check if the Database exists, if not create new database. 
*/ CREATE DATABASE IF NOT EXISTS bankdata_icebergdb; /* Create transaction_table1. Replace the bucket name in the LOCATION clause. */ CREATE TABLE bankdata_icebergdb.transaction_table1 ( transaction_id string, transaction_type string, transaction_amount double) LOCATION 's3://<bucket-name>/bankdata_icebergdb/transaction-table1' TBLPROPERTIES ( 'table_type'='iceberg', 'write_compression'='zstd' ); /* Create transaction_table2 */ CREATE TABLE bankdata_icebergdb.transaction_table2 ( transaction_id string, transaction_location string, transaction_date date) LOCATION 's3://<bucket-name>/bankdata_icebergdb/transaction-table2' TBLPROPERTIES ( 'table_type'='iceberg', 'write_compression'='zstd' ); INSERT INTO bankdata_icebergdb.transaction_table1 (transaction_id, transaction_type, transaction_amount) VALUES ('T001', 'purchase', 50.0), ('T002', 'purchase', 120.0), ('T003', 'refund', 200.5), ('T004', 'purchase', 80.0), ('T005', 'withdrawal', 500.0), ('T006', 'purchase', 300.0), ('T007', 'deposit', 1000.0), ('T008', 'refund', 20.0), ('T009', 'purchase', 150.0), ('T010', 'withdrawal', 75.0); INSERT INTO bankdata_icebergdb.transaction_table2 (transaction_id, transaction_location, transaction_date) VALUES ('T001', 'Charlotte', DATE '2024-10-01'), ('T002', 'Seattle', DATE '2024-10-02'), ('T003', 'Chicago', DATE '2024-10-03'), ('T004', 'Miami', DATE '2024-10-04'), ('T005', 'New York', DATE '2024-10-05'), ('T006', 'Austin', DATE '2024-10-06'), ('T007', 'Denver', DATE '2024-10-07'), ('T008', 'Boston', DATE '2024-10-08'), ('T009', 'San Jose', DATE '2024-10-09'), ('T010', 'Phoenix', DATE '2024-10-10'); Verify the created tables in the Athena query editor by running a preview. Share the database and tables from central to producer account In the central governance account, you share the database and the two tables to the producer account and the Data-Analyst role in the producer account. Sign in to the Lake Formation console as the Lake Formation admin role. In the navigation pane, choose Data permissions . Choose Grant and provide the following information: For Principals , select External accounts and enter the producer account ID, as shown in the following screenshot. For Named Data Catalog Resources , select the default catalog and database bankdata_icebergdb , as shown in the following screenshot. Under Database permissions , select Describe . For Grantable permissions , select Describe . Choose Grant . Repeat the preceding steps to grant access to the producer account definer role Data-Analyst on the database bankdata_icebergdb and the two tables transaction_table1 and transaction_table2 as follows. Under Database permissions , grant Create table and Describe permissions. Under Table permissions , grant Select and Describe on all columns. With these steps, the central governance account data steward has shared the database and tables to the producer account definer role. Steps for producer account Follow these steps for the producer account: Sign in to the Lake Formation console on the producer account as the Lake Formation administrator. In the left navigation pane, choose Databases . A blue banner will appear on the console, showing pending invitations from AWS Resource Access Manager (AWS RAM). Open the AWS RAM console and review the AWS RAM shares under Shared with me. You will see the AWS RAM shares in a pending state. Select the pending AWS RAM share from the central account and choose Accept resource share . After the resource share request is accepted, the shared database shows up in the producer account.
On the Lake Formation console, select the database. On the Create dropdown list, choose Resource link . Provide a name rl_bank_iceberg and choose Create . Let’s grant Describe permission on the resource link to the Data-Analyst role in the producer account in the following steps. In the left navigation pane, choose Data permissions . Choose the Data-Analyst role. Select the resource link rl_bank_iceberg for the database as shown in the following screenshot. Grant Describe permission on the resource link. Note: Cross-account Data Catalog views can’t be created using a resource link, although a resource link is needed for the SDK use of SPARK dialect. Next, add the central account Data Catalog as a Data Source in Athena from producer account: Open the Athena console. On the left navigation pane, choose Data sources and catalogs . Choose Create data source . Select S3-AWS Glue Data Catalog . Choose AWS – Glue Data Catalog in another account and name the data source as centraladmin . Choose Next and then create data source. After the data source is created, navigate to the Query editor and verify the Data source centraladmin appears, as shown in the following screenshot. The definer role can also now access and query the central catalog database. Create SPARK dialect view In this step, you create a view with SPARK dialect, using AWS Glue CLI command create-table : Sign in to the AWS console in the producer account as Data-Analyst role. Enter the following command in your CLI environment, such as AWS CloudShell , to create a SPARK DIALECT: aws glue create-table --cli-input-json '{ "DatabaseName": "rl_bank_iceberg", "TableInput": { "Name": "mdv_transaction1", "StorageDescriptor": { "Columns": [ { "Name": "transaction_id", "Type": "string" }, { "Name": "transaction_type", "Type": "string" }, { "Name": "transaction_amount", "Type": "float" }, { "Name": "transaction_location", "Type": "string" }, { "Name": "transaction_date", "Type": "date" } ], "SerdeInfo": {} }, "ViewDefinition": { "SubObjects": [ "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table1", "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table2" ], "IsProtected": true, "Representations": [ { "Dialect": "SPARK", "DialectVersion": "1.0", "ViewOriginalText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100;", "ViewExpandedText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100;" } ] } } }' Open the Lake Formation console and verify if the view is created. Verify the dialect of the view on the SQL definitions tab for the view details. Add ATHENA dialect To add ATHENA dialect, follow these steps: On the Athena console, select centraladmin from the Data source . 
Enter the following SQL script to create the ATHENA dialect for the same view: ALTER VIEW mdv_transaction1 FORCE ADD DIALECT AS SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100 We can’t use the resource link rl_bank_iceberg in the Athena query editor to create or alter a view in the central account. Verify the added dialect by running a preview in Athena. For running the query, you can use either the resource link rl_bank_iceberg from the producer account catalog or use the centraladmin catalog. The following screenshot shows querying using the resource link of the database in the producer account catalog. The following screenshot shows querying the view from the producer using the connected catalog centraladmin as the data source. Verify the dialects on the view by inspecting the table in the Lake Formation console. You can now query the view as the Data-Analyst role in the producer account, using both Athena and Spark. The view will also show in the central account as shown in the following code example, with access to the Lake Formation admin. You can also create the view with ATHENA dialect and add the SPARK dialect. The SQL syntax to create the view in ATHENA dialect is shown in the following example: create protected multi dialect view mdv_transaction1 security definer as SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100; The update-table CLI to add the corresponding SPARK dialect is shown in the following example: aws glue update-table --cli-input-json '{ "DatabaseName": "rl_bank_iceberg", "ViewUpdateAction": "ADD", "Force": true, "TableInput": { "Name": "mdv_transaction1", "StorageDescriptor": { "Columns": [ { "Name": "transaction_id", "Type": "string" }, { "Name": "transaction_type", "Type": "string" }, { "Name": "transaction_amount", "Type": "float" }, { "Name": "transaction_location", "Type": "string" }, { "Name": "transaction_date", "Type": "date" } ], "SerdeInfo": {} }, "ViewDefinition": { "SubObjects": [ "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table1", "arn:aws:glue:<your-region>:<your-central-account-id>:table/bankdata_icebergdb/transaction_table2" ], "IsProtected": true, "Representations": [ { "Dialect": "SPARK", "DialectVersion": "1.0", "ViewOriginalText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100", "ViewExpandedText": "SELECT t1.transaction_id, t1.transaction_type, t1.transaction_amount, t2.transaction_location, t2.transaction_date FROM transaction_table1 t1 JOIN transaction_table2 t2 ON t1.transaction_id = t2.transaction_id WHERE t1.transaction_amount > 100" } ] } } }' The following is a sample Python script to create a SPARK dialect view: glueview-createtable.py . The following code block is a sample AWS Glue extract, transform, and load (ETL) script to access the Spark dialect of the view from AWS Glue 5.0 from the central account.
The AWS Glue job execution role should have Lake Formation SELECT permission on the AWS Glue view: from pyspark.context import SparkContext from pyspark.sql import SparkSession aws_region = "<your-region>" aws_account_id = "<your-central-account-id>" local_catalogname = "spark_catalog" warehouse_path = "s3://<your-bucket-name>/bankdata_icebergdb/transaction-table1" spark = SparkSession.builder.appName('query_glue_view') \ .config('spark.sql.extensions','org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions') \ .config(f'spark.sql.catalog.{local_catalogname}', 'org.apache.iceberg.spark.SparkSessionCatalog') \ .config(f'spark.sql.catalog.{local_catalogname}.catalog-impl', 'org.apache.iceberg.aws.glue.GlueCatalog') \ .config(f'spark.sql.catalog.{local_catalogname}.client.region', aws_region) \ .config(f'spark.sql.catalog.{local_catalogname}.glue.account-id', aws_account_id) \ .config(f'spark.sql.catalog.{local_catalogname}.io-impl', 'org.apache.iceberg.aws.s3.S3FileIO') \ .config(f'spark.sql.catalog.{local_catalogname}.warehouse',warehouse_path) \ .getOrCreate() spark.sql("show databases").show() spark.sql(f"SHOW TABLES IN {local_catalogname}.bankdata_icebergdb").show() spark.sql(f"SELECT * FROM {local_catalogname}.bankdata_icebergdb.mdv_transaction1").show() In the AWS Glue job details, for Lake Formation managed tables and for Iceberg tables respectively, set these additional parameters: --enable-lakeformation-fine-grained-access = true --datalake-formats = iceberg Cleanup To avoid incurring costs, clean up the resources you used for this post: Revoke the Lake Formation permissions granted to the Data-Analyst role and the producer account Drop the Athena tables Delete the Athena query results from your Amazon Simple Storage Service (Amazon S3) bucket Delete the Data-Analyst role from IAM Conclusion In this post, we demonstrated how to use cross-account IAM definer roles with AWS Glue Data Catalog views . We showed how data owner accounts can create and manage views in a central governance account while maintaining security and control over their data assets. This feature enables enterprises to implement sophisticated data mesh architectures without compromising on security or requiring data duplication. The ability to use cross-account definer roles with Data Catalog views provides several key advantages: Streamlines view management in multi-account environments Maintains existing CI/CD workflows and automation Enhances security through centralized governance Reduces operational overhead by eliminating the need for data duplication As organizations continue to build and scale their data lakehouse architectures across multiple AWS accounts, cross-account definer roles for Data Catalog views provide a crucial capability for implementing efficient, secure, and well-governed data sharing patterns. About the authors Aarthi Srinivasan Aarthi is a Senior Big Data Architect at Amazon Web Services (AWS). She works with AWS customers and partners to architect data lake solutions, enhance product features, and establish best practices for data governance. Sundeep Kumar Sundeep is a Sr. Specialist Solutions Architect at Amazon Web Services (AWS), helping customers build data lake and analytics platforms and solutions. When not building and designing data lakes, Sundeep enjoys listening to music and playing guitar.
| 2026-01-13T09:29:13 |
https://doc.rust-lang.org/reference/attributes.html#r-attributes.input | Attributes - The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function.
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicate that a type will have more fields/variants added in future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13 |
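As a rough illustration of a few entries from the built-in attributes index above, here is a small sketch showing them in context; the item names, version numbers, and messages are invented for the example and are not taken from the reference.

#![allow(unused)]

// Diagnostics: mark an old API as deprecated and lint on ignored return values.
#[deprecated(since = "1.2.0", note = "use `new_api` instead")]
pub fn old_api() {}

#[must_use = "the returned handle should be checked"]
pub struct Handle(u32);

// Code generation hint, plus a forward-compatibility marker from the Type System group.
#[inline]
pub fn new_api() -> Handle {
    Handle(0)
}

#[non_exhaustive]
pub enum Status {
    Ok,
    Failed,
}

fn main() {
    let _handle = new_api();
}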
https://doc.rust-lang.org/std/marker/trait.Copy.html | Copy in std::marker - Rust std::marker Trait Copy 1.0.0 · Source pub trait Copy: Clone { } Types whose values can be duplicated simply by copying bits. By default, variable bindings have ‘move semantics.’ In other words: #[derive(Debug)] struct Foo; let x = Foo; let y = x; // `x` has moved into `y`, and so cannot be used // println!("{x:?}"); // error: use of moved value However, if a type implements Copy , it instead has ‘copy semantics’: // We can derive a `Copy` implementation. `Clone` is also required, as it's // a supertrait of `Copy`. #[derive(Debug, Copy, Clone)] struct Foo; let x = Foo; let y = x; // `y` is a copy of `x` println! ( "{x:?}" ); // A-OK! It’s important to note that in these two examples, the only difference is whether you are allowed to access x after the assignment. Under the hood, both a copy and a move can result in bits being copied in memory, although this is sometimes optimized away. § How can I implement Copy ? There are two ways to implement Copy on your type. The simplest is to use derive : #[derive(Copy, Clone)] struct MyStruct; You can also implement Copy and Clone manually: struct MyStruct; impl Copy for MyStruct { } impl Clone for MyStruct { fn clone( & self ) -> MyStruct { * self } } There is a small difference between the two. The derive strategy will also place a Copy bound on type parameters: #[derive(Clone)] struct MyStruct<T>(T); impl <T: Copy> Copy for MyStruct<T> { } This isn’t always desired. For example, shared references ( &T ) can be copied regardless of whether T is Copy . Likewise, a generic struct containing markers such as PhantomData could potentially be duplicated with a bit-wise copy. § What’s the difference between Copy and Clone ? Copies happen implicitly, for example as part of an assignment y = x . The behavior of Copy is not overloadable; it is always a simple bit-wise copy. Cloning is an explicit action, x.clone() . The implementation of Clone can provide any type-specific behavior necessary to duplicate values safely. For example, the implementation of Clone for String needs to copy the pointed-to string buffer in the heap. A simple bitwise copy of String values would merely copy the pointer, leading to a double free down the line. For this reason, String is Clone but not Copy . Clone is a supertrait of Copy , so everything which is Copy must also implement Clone . If a type is Copy then its Clone implementation only needs to return *self (see the example above). § When can my type be Copy ? A type can implement Copy if all of its components implement Copy . For example, this struct can be Copy : #[derive(Copy, Clone)] struct Point { x: i32, y: i32, } A struct can be Copy , and i32 is Copy , therefore Point is eligible to be Copy . By contrast, consider struct PointList { points: Vec<Point>, } The struct PointList cannot implement Copy , because Vec<T> is not Copy .
If we attempt to derive a Copy implementation, we’ll get an error: the trait `Copy` cannot be implemented for this type; field `points` does not implement `Copy` Shared references ( &T ) are also Copy , so a type can be Copy , even when it holds shared references of types T that are not Copy . Consider the following struct, which can implement Copy , because it only holds a shared reference to our non- Copy type PointList from above: #[derive(Copy, Clone)] struct PointListWrapper< 'a > { point_list_ref: & 'a PointList, } § When can’t my type be Copy ? Some types can’t be copied safely. For example, copying &mut T would create an aliased mutable reference. Copying String would duplicate responsibility for managing the String ’s buffer, leading to a double free. Generalizing the latter case, any type implementing Drop can’t be Copy , because it’s managing some resource besides its own size_of::<T> bytes. If you try to implement Copy on a struct or enum containing non- Copy data, you will get the error E0204 . § When should my type be Copy ? Generally speaking, if your type can implement Copy , it should. Keep in mind, though, that implementing Copy is part of the public API of your type. If the type might become non- Copy in the future, it could be prudent to omit the Copy implementation now, to avoid a breaking API change. § Additional implementors In addition to the implementors listed below , the following types also implement Copy : Function item types (i.e., the distinct types defined for each function) Function pointer types (e.g., fn() -> i32 ) Closure types, if they capture no value from the environment or if all such captured values implement Copy themselves. Note that variables captured by shared reference always implement Copy (even if the referent doesn’t), while variables captured by mutable reference never implement Copy . Dyn Compatibility § This trait is not dyn compatible . In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe. Implementors § Source § impl Copy for AsciiChar 1.0.0 · Source § impl Copy for std::cmp:: Ordering 1.34.0 · Source § impl Copy for Infallible 1.64.0 · Source § impl Copy for FromBytesWithNulError 1.28.0 · Source § impl Copy for std::fmt:: Alignment Source § impl Copy for DebugAsHex Source § impl Copy for Sign 1.0.0 · Source § impl Copy for ErrorKind 1.0.0 · Source § impl Copy for SeekFrom 1.7.0 · Source § impl Copy for IpAddr Source § impl Copy for Ipv6MulticastScope 1.0.0 · Source § impl Copy for Shutdown 1.0.0 · Source § impl Copy for SocketAddr 1.0.0 · Source § impl Copy for FpCategory 1.55.0 · Source § impl Copy for IntErrorKind Source § impl Copy for BacktraceStyle Source § impl Copy for SearchStep 1.0.0 · Source § impl Copy for std::sync::atomic:: Ordering 1.12.0 · Source § impl Copy for RecvTimeoutError 1.0.0 · Source § impl Copy for TryRecvError 1.0.0 · Source § impl Copy for bool 1.0.0 · Source § impl Copy for char 1.0.0 · Source § impl Copy for f16 1.0.0 · Source § impl Copy for f32 1.0.0 · Source § impl Copy for f64 1.0.0 · Source § impl Copy for f128 1.0.0 · Source § impl Copy for i8 1.0.0 · Source § impl Copy for i16 1.0.0 · Source § impl Copy for i32 1.0.0 · Source § impl Copy for i64 1.0.0 · Source § impl Copy for i128 1.0.0 · Source § impl Copy for isize Source § impl Copy for ! 
1.0.0 · Source § impl Copy for u8 1.0.0 · Source § impl Copy for u16 1.0.0 · Source § impl Copy for u32 1.0.0 · Source § impl Copy for u64 1.0.0 · Source § impl Copy for u128 1.0.0 · Source § impl Copy for usize 1.27.0 · Source § impl Copy for CpuidResult 1.27.0 · Source § impl Copy for __m128 1.89.0 · Source § impl Copy for __m128bh 1.27.0 · Source § impl Copy for __m128d Source § impl Copy for __m128h 1.27.0 · Source § impl Copy for __m128i 1.27.0 · Source § impl Copy for __m256 1.89.0 · Source § impl Copy for __m256bh 1.27.0 · Source § impl Copy for __m256d Source § impl Copy for __m256h 1.27.0 · Source § impl Copy for __m256i 1.72.0 · Source § impl Copy for __m512 1.89.0 · Source § impl Copy for __m512bh 1.72.0 · Source § impl Copy for __m512d Source § impl Copy for __m512h 1.72.0 · Source § impl Copy for __m512i Source § impl Copy for bf16 Source § impl Copy for AllocError Source § impl Copy for Global 1.28.0 · Source § impl Copy for Layout 1.28.0 · Source § impl Copy for System 1.0.0 · Source § impl Copy for TypeId 1.34.0 · Source § impl Copy for TryFromSliceError 1.34.0 · Source § impl Copy for CharTryFromError 1.59.0 · Source § impl Copy for TryFromCharError 1.0.0 · Source § impl Copy for Error Source § impl Copy for FormattingOptions 1.75.0 · Source § impl Copy for FileTimes 1.1.0 · Source § impl Copy for FileType 1.0.0 · Source § impl Copy for Empty 1.0.0 · Source § impl Copy for Sink Source § impl Copy for Assume 1.0.0 · Source § impl Copy for Ipv4Addr 1.0.0 · Source § impl Copy for Ipv6Addr 1.0.0 · Source § impl Copy for SocketAddrV4 1.0.0 · Source § impl Copy for SocketAddrV6 1.34.0 · Source § impl Copy for TryFromIntError 1.0.0 · Source § impl Copy for RangeFull Source § impl Copy for UCred Available on Unix only. 1.61.0 · Source § impl Copy for ExitCode 1.0.0 · Source § impl Copy for ExitStatus Source § impl Copy for ExitStatusError Source § impl Copy for std::ptr:: Alignment Source § impl Copy for DefaultRandomSource 1.0.0 · Source § impl Copy for Utf8Error 1.0.0 · Source § impl Copy for RecvError 1.5.0 · Source § impl Copy for WaitTimeoutResult 1.36.0 · Source § impl Copy for RawWakerVTable 1.26.0 · Source § impl Copy for AccessError 1.19.0 · Source § impl Copy for ThreadId 1.3.0 · Source § impl Copy for Duration 1.8.0 · Source § impl Copy for Instant 1.8.0 · Source § impl Copy for SystemTime 1.33.0 · Source § impl Copy for PhantomPinned 1.0.0 · Source § impl<'a> Copy for Component <'a> 1.0.0 · Source § impl<'a> Copy for Prefix <'a> Source § impl<'a> Copy for Utf8Pattern <'a> 1.0.0 · Source § impl<'a> Copy for Arguments <'a> 1.36.0 · Source § impl<'a> Copy for IoSlice <'a> 1.10.0 · Source § impl<'a> Copy for Location <'a> 1.28.0 · Source § impl<'a> Copy for Ancestors <'a> 1.0.0 · Source § impl<'a> Copy for PrefixComponent <'a> Source § impl<'a> Copy for PhantomContravariantLifetime <'a> Source § impl<'a> Copy for PhantomCovariantLifetime <'a> Source § impl<'a> Copy for PhantomInvariantLifetime <'a> Source § impl<'a, T, const N: usize > Copy for ArrayWindows <'a, T, N> where T: Copy + 'a, 1.63.0 · Source § impl<'fd> Copy for BorrowedFd <'fd> Available on Unix or HermitCore or target_os=trusty or WASI or target_os=motor only. 1.63.0 · Source § impl<'handle> Copy for BorrowedHandle <'handle> Available on Windows only. 1.63.0 · Source § impl<'socket> Copy for BorrowedSocket <'socket> Available on Windows only. 1.55.0 · Source § impl<B, C> Copy for ControlFlow <B, C> where B: Copy , C: Copy , Source § impl<Dyn> Copy for DynMetadata <Dyn> where Dyn: ? 
Sized , 1.28.0 · Source § impl<F> Copy for RepeatWith <F> where F: Copy , 1.0.0 · Source § impl<Idx> Copy for RangeTo <Idx> where Idx: Copy , 1.26.0 · Source § impl<Idx> Copy for std::ops:: RangeToInclusive <Idx> where Idx: Copy , Source § impl<Idx> Copy for Range <Idx> where Idx: Copy , Source § impl<Idx> Copy for RangeFrom <Idx> where Idx: Copy , Source § impl<Idx> Copy for RangeInclusive <Idx> where Idx: Copy , Source § impl<Idx> Copy for std::range:: RangeToInclusive <Idx> where Idx: Copy , 1.33.0 · Source § impl<Ptr> Copy for Pin <Ptr> where Ptr: Copy , 1.17.0 · Source § impl<T> Copy for Bound <T> where T: Copy , 1.0.0 · Source § impl<T> Copy for Option <T> where T: Copy , 1.36.0 · Source § impl<T> Copy for Poll <T> where T: Copy , 1.0.0 · Source § impl<T> Copy for *const T where T: ? Sized , 1.0.0 · Source § impl<T> Copy for *mut T where T: ? Sized , 1.0.0 · Source § impl<T> Copy for &T where T: ? Sized , Shared references can be copied, but mutable references cannot ! 1.19.0 · Source § impl<T> Copy for Reverse <T> where T: Copy , 1.21.0 · Source § impl<T> Copy for Discriminant <T> 1.20.0 · Source § impl<T> Copy for ManuallyDrop <T> where T: Copy + ? Sized , 1.28.0 · Source § impl<T> Copy for NonZero <T> where T: ZeroablePrimitive , 1.74.0 · Source § impl<T> Copy for Saturating <T> where T: Copy , 1.0.0 · Source § impl<T> Copy for Wrapping <T> where T: Copy , 1.25.0 · Source § impl<T> Copy for NonNull <T> where T: ? Sized , Source § impl<T> Copy for Exclusive <T> where T: Sync + Copy , Source § impl<T> Copy for PhantomContravariant <T> where T: ? Sized , Source § impl<T> Copy for PhantomCovariant <T> where T: ? Sized , 1.0.0 · Source § impl<T> Copy for PhantomData <T> where T: ? Sized , Source § impl<T> Copy for PhantomInvariant <T> where T: ? Sized , 1.36.0 · Source § impl<T> Copy for MaybeUninit <T> where T: Copy , 1.0.0 · Source § impl<T, E> Copy for Result <T, E> where T: Copy , E: Copy , 1.58.0 · Source § impl<T, const N: usize > Copy for [T; N] where T: Copy , Source § impl<T, const N: usize > Copy for Mask <T, N> where T: MaskElement , LaneCount <N>: SupportedLaneCount , Source § impl<T, const N: usize > Copy for Simd <T, N> where LaneCount <N>: SupportedLaneCount , T: SimdElement , Source § impl<T: Copy > Copy for SendTimeoutError <T> 1.0.0 · Source § impl<T: Copy > Copy for TrySendError <T> 1.0.0 · Source § impl<T: Copy > Copy for SendError <T> Source § impl<Y, R> Copy for CoroutineState <Y, R> where Y: Copy , R: Copy , | 2026-01-13T09:29:13 |
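The derive discussion above notes that #[derive(Copy, Clone)] adds a T: Copy bound on type parameters, which is not always wanted because shared references are Copy regardless of T. Here is a minimal sketch of the manual alternative; RefWrapper is a made-up type for illustration, not part of the standard library.

// Holds only a shared reference, so a bitwise copy is always fine,
// even when T itself is not Copy.
struct RefWrapper<'a, T>(&'a T);

// Manual impls avoid the `T: Copy` bound that derive would add.
impl<'a, T> Clone for RefWrapper<'a, T> {
    fn clone(&self) -> Self {
        *self
    }
}
impl<'a, T> Copy for RefWrapper<'a, T> {}

fn main() {
    let not_copy = String::from("hello"); // String is Clone but not Copy
    let a = RefWrapper(&not_copy);
    let b = a; // `a` stays usable: the reference was copied, not moved
    println!("{} {}", a.0, b.0);
}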
https://doc.rust-lang.org/std/iter/trait.FromIterator.html | FromIterator in std::iter - Rust std::iter Trait FromIterator 1.0.0 · Source pub trait FromIterator<A>: Sized { // Required method fn from_iter <T>(iter: T) -> Self where T: IntoIterator <Item = A> ; } Conversion from an Iterator . By implementing FromIterator for a type, you define how it will be created from an iterator. This is common for types which describe a collection of some kind. If you want to create a collection from the contents of an iterator, the Iterator::collect() method is preferred. However, when you need to specify the container type, FromIterator::from_iter() can be more readable than using a turbofish (e.g. ::<Vec<_>>() ). See the Iterator::collect() documentation for more examples of its use. See also: IntoIterator . § Examples Basic usage: let five_fives = std::iter::repeat( 5 ).take( 5 ); let v = Vec::from_iter(five_fives); assert_eq! (v, vec! [ 5 , 5 , 5 , 5 , 5 ]); Using Iterator::collect() to implicitly use FromIterator : let five_fives = std::iter::repeat( 5 ).take( 5 ); let v: Vec<i32> = five_fives.collect(); assert_eq! (v, vec! [ 5 , 5 , 5 , 5 , 5 ]); Using FromIterator::from_iter() as a more readable alternative to Iterator::collect() : use std::collections::VecDeque; let first = ( 0 .. 10 ).collect::<VecDeque<i32>>(); let second = VecDeque::from_iter( 0 .. 10 ); assert_eq! (first, second); Implementing FromIterator for your type: // A sample collection, that's just a wrapper over Vec<T> #[derive(Debug)] struct MyCollection(Vec<i32>); // Let's give it some methods so we can create one and add things // to it. impl MyCollection { fn new() -> MyCollection { MyCollection(Vec::new()) } fn add( &mut self , elem: i32) { self . 0 .push(elem); } } // and we'll implement FromIterator impl FromIterator<i32> for MyCollection { fn from_iter<I: IntoIterator<Item=i32>>(iter: I) -> Self { let mut c = MyCollection::new(); for i in iter { c.add(i); } c } } // Now we can make a new iterator... let iter = ( 0 .. 5 ).into_iter(); // ... and make a MyCollection out of it let c = MyCollection::from_iter(iter); assert_eq! (c. 0 , vec! [ 0 , 1 , 2 , 3 , 4 ]); // collect works too! let iter = ( 0 .. 5 ).into_iter(); let c: MyCollection = iter.collect(); assert_eq! (c. 0 , vec! [ 0 , 1 , 2 , 3 , 4 ]); Required Methods § 1.0.0 · Source fn from_iter <T>(iter: T) -> Self where T: IntoIterator <Item = A>, Creates a value from an iterator. See the module-level documentation for more. § Examples let five_fives = std::iter::repeat( 5 ).take( 5 ); let v = Vec::from_iter(five_fives); assert_eq! (v, vec! [ 5 , 5 , 5 , 5 , 5 ]); Dyn Compatibility § This trait is not dyn compatible . In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe. Implementors § Source § impl FromIterator < AsciiChar > for String 1.80.0 · Source § impl FromIterator < char > for Box < str > Source § impl FromIterator < char > for ByteString 1.0.0 · Source § impl FromIterator < char > for String Source § impl FromIterator < u8 > for ByteString 1.23.0 · Source § impl FromIterator < () > for () Collapses all unit items from an iterator into one.
This is more useful when combined with higher-level abstractions, like collecting to a Result<(), E> where you only care about errors: use std::io:: * ; let data = vec! [ 1 , 2 , 3 , 4 , 5 ]; let res: Result <()> = data.iter() .map(|x| writeln! (stdout(), "{x}" )) .collect(); assert! (res.is_ok()); Source § impl FromIterator < ByteString > for ByteString 1.52.0 · Source § impl FromIterator < OsString > for OsString 1.80.0 · Source § impl FromIterator < String > for Box < str > 1.4.0 · Source § impl FromIterator < String > for String Source § impl<'a> FromIterator <&'a AsciiChar > for String 1.80.0 · Source § impl<'a> FromIterator <&'a char > for Box < str > 1.17.0 · Source § impl<'a> FromIterator <&'a char > for String 1.80.0 · Source § impl<'a> FromIterator <&'a str > for Box < str > Source § impl<'a> FromIterator <&'a str > for ByteString 1.0.0 · Source § impl<'a> FromIterator <&'a str > for String Source § impl<'a> FromIterator <&'a ByteStr > for ByteString 1.52.0 · Source § impl<'a> FromIterator <&'a OsStr > for OsString Source § impl<'a> FromIterator <&'a [ u8 ]> for ByteString Source § impl<'a> FromIterator < AsciiChar > for Cow <'a, str > 1.80.0 · Source § impl<'a> FromIterator < Cow <'a, str >> for Box < str > 1.19.0 · Source § impl<'a> FromIterator < Cow <'a, str >> for String 1.52.0 · Source § impl<'a> FromIterator < Cow <'a, OsStr >> for OsString 1.12.0 · Source § impl<'a> FromIterator < char > for Cow <'a, str > 1.12.0 · Source § impl<'a> FromIterator < String > for Cow <'a, str > 1.12.0 · Source § impl<'a, 'b> FromIterator <&'b str > for Cow <'a, str > 1.0.0 · Source § impl<'a, T> FromIterator <T> for Cow <'a, [T] > where T: Clone , 1.80.0 · Source § impl<A> FromIterator < Box < str , A>> for Box < str > where A: Allocator , 1.45.0 · Source § impl<A> FromIterator < Box < str , A>> for String where A: Allocator , 1.0.0 · Source § impl<A, E, V> FromIterator < Result <A, E>> for Result <V, E> where V: FromIterator <A>, 1.0.0 · Source § impl<A, V> FromIterator < Option <A>> for Option <V> where V: FromIterator <A>, 1.32.0 · Source § impl<I> FromIterator <I> for Box < [I] > 1.0.0 · Source § impl<K, V> FromIterator < (K, V) > for BTreeMap <K, V> where K: Ord , 1.0.0 · Source § impl<K, V, S> FromIterator < (K, V) > for HashMap <K, V, S> where K: Eq + Hash , S: BuildHasher + Default , 1.0.0 · Source § impl<P: AsRef < Path >> FromIterator <P> for PathBuf 1.0.0 · Source § impl<T> FromIterator <T> for BTreeSet <T> where T: Ord , 1.0.0 · Source § impl<T> FromIterator <T> for BinaryHeap <T> where T: Ord , 1.0.0 · Source § impl<T> FromIterator <T> for LinkedList <T> 1.0.0 · Source § impl<T> FromIterator <T> for VecDeque <T> 1.37.0 · Source § impl<T> FromIterator <T> for Rc < [T] > 1.37.0 · Source § impl<T> FromIterator <T> for Arc < [T] > 1.0.0 · Source § impl<T> FromIterator <T> for Vec <T> Collects an iterator into a Vec, commonly called via Iterator::collect() § Allocation behavior In general Vec does not guarantee any particular growth or allocation strategy. That also applies to this trait impl. Note: This section covers implementation details and is therefore exempt from stability guarantees. Vec may use any or none of the following strategies, depending on the supplied iterator: preallocate based on Iterator::size_hint() and panic if the number of items is outside the provided lower/upper bounds use an amortized growth strategy similar to pushing one item at a time perform the iteration in-place on the original allocation backing the iterator The last case warrants some attention. 
It is an optimization that in many cases reduces peak memory consumption and improves cache locality. But when big, short-lived allocations are created, only a small fraction of their items get collected, no further use is made of the spare capacity and the resulting Vec is moved into a longer-lived structure, then this can lead to the large allocations having their lifetimes unnecessarily extended which can result in increased memory footprint. In cases where this is an issue, the excess capacity can be discarded with Vec::shrink_to() , Vec::shrink_to_fit() or by collecting into Box<[T]> instead, which additionally reduces the size of the long-lived struct. static LONG_LIVED: Mutex<Vec<Vec<u16>>> = Mutex::new(Vec::new()); for i in 0 .. 10 { let big_temporary: Vec<u16> = ( 0 .. 1024 ).collect(); // discard most items let mut result: Vec< _ > = big_temporary.into_iter().filter(|i| i % 100 == 0 ).collect(); // without this a lot of unused capacity might be moved into the global result.shrink_to_fit(); LONG_LIVED.lock().unwrap().push(result); } 1.79.0 · Source § impl<T, ExtendT> FromIterator < (T₁, T₂, …, Tₙ) > for (ExtendT₁, ExtendT₂, …, ExtendTₙ) where ExtendT: Default + Extend <T>, This implementation turns an iterator of tuples into a tuple of types which implement Default and Extend . This is similar to Iterator::unzip , but is also composable with other FromIterator implementations: let string = "1,2,123,4" ; // Example given for a 2-tuple, but 1- through 12-tuples are supported let (numbers, lengths): (Vec< _ >, Vec< _ >) = string .split( ',' ) .map(|s| s.parse().map(|n: u32| (n, s.len()))) .collect::< Result < _ , _ >>() ? ; assert_eq! (numbers, [ 1 , 2 , 123 , 4 ]); assert_eq! (lengths, [ 1 , 1 , 3 , 1 ]); 1.0.0 · Source § impl<T, S> FromIterator <T> for HashSet <T, S> where T: Eq + Hash , S: BuildHasher + Default , | 2026-01-13T09:29:13 |
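One implementor listed above that the page does not show in use is FromIterator<Option<A>> for Option<V>. A small sketch of what that impl gives you when collecting; the values are arbitrary and only illustrate the behavior.

fn main() {
    // Every item is Some, so the whole collect succeeds.
    let all_some = vec![Some(1), Some(2), Some(3)];
    let collected: Option<Vec<i32>> = all_some.into_iter().collect();
    assert_eq!(collected, Some(vec![1, 2, 3]));

    // A single None makes the overall result None.
    let with_none = vec![Some(1), None, Some(3)];
    let collected: Option<Vec<i32>> = with_none.into_iter().collect();
    assert_eq!(collected, None);
}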
https://man.freebsd.org/cgi/man.cgi?query=fish-doc&sektion=1&manpath=freebsd-ports#content | fish-doc FISH-DOC (1) fish-shell FISH-DOC (1) This is the documentation for fish , the friendly interactive shell. A shell is a program that helps you operate your computer by starting other programs. fish offers a command-line interface focused on usability and interactive use. Some of the special features of fish are: • Extensive UI : Syntax highlighting , Autosuggestions , tab completion and selection lists that can be navigated and filtered. 
• No configuration needed : fish is designed to be ready to use immedi- ately, without requiring extensive configuration. • Easy scripting : New functions can be added on the fly. The syntax is easy to learn and use. This page explains how to install and set up fish and where to get more information. WHERE TO GO? If this is your first time using fish, see the tutorial . If you are already familiar with other shells like bash and want to see the scripting differences, see Fish For Bash Users . For an overview of fish's scripting language, see The Fish Language . If it would be useful in a script file, it's here. For information on using fish interactively, see Interactive use . If it's about key presses, syntax highlighting or anything else that needs an interactive terminal session, look here. If you need to install fish first, read on, the rest of this document will tell you how to get, install and configure fish. INSTALLATION This section describes how to install, uninstall, start, and exit fish . It also explains how to make fish the default shell. Installation Up-to-date instructions for installing the latest version of fish are on the fish homepage < https://fishshell.com/ >. To install the development version of fish, see the instructions on the project's GitHub page < https://github.com/fish-shell/fish-shell >. Starting and Exiting Once fish has been installed, open a terminal. If fish is not the de- fault shell: • Type fish to start a shell: > fish • Type exit to end the session: > exit Default Shell There are multiple ways to switch to fish (or any other shell) as your default. The simplest method is to set your terminal emulator (eg GNOME Termi- nal, Apple's Terminal.app, or Konsole) to start fish directly. See its configuration and set the program to start to /usr/local/bin/fish (if that's where fish is installed - substitute another location as appro- priate). Alternatively, you can set fish as your login shell so that it will be started by all terminal logins, including SSH. WARNING: Setting fish as your login shell may cause issues, such as an incor- rect PATH . Some operating systems, including a number of Linux dis- tributions, require the login shell to be Bourne-compatible and to read configuration from /etc/profile . fish may not be suitable as a login shell on these systems. To change your login shell to fish: 1. Add the shell to /etc/shells with: > echo /usr/local/bin/fish | sudo tee -a /etc/shells 2. Change your default shell with: > chsh -s /usr/local/bin/fish Again, substitute the path to fish for /usr/local/bin/fish - see com- mand -s fish inside fish. To change it back to another shell, just sub- stitute /usr/local/bin/fish with /bin/bash , /bin/tcsh or /bin/zsh as appropriate in the steps above. Uninstalling For uninstalling fish: see FAQ: Uninstalling fish . Shebang Line Because shell scripts are written in many different languages, they need to carry information about which interpreter should be used to ex- ecute them. For this, they are expected to have a first line, the she- bang line, which names the interpreter executable. A script written in bash would need a first line like this: #!/bin/bash When the shell tells the kernel to execute the file, it will use the interpreter /bin/bash . For a script written in another language, just replace /bin/bash with the interpreter for that language. For example: /usr/bin/python for a python script, or /usr/local/bin/fish for a fish script, if that is where you have them installed. 
If you want to share your script with others, you might want to use env to allow for the interpreter to be installed in other locations. For example: #!/usr/bin/env fish echo Hello from fish $version This will call env , which then goes through PATH to find a program called "fish". This makes it work, whether fish is installed in (for example) /usr/local/bin/fish , /usr/bin/fish , or ~/.local/bin/fish , as long as that directory is in PATH . The shebang line is only used when scripts are executed without speci- fying the interpreter. For functions inside fish or when executing a script with fish /path/to/script , a shebang is not required (but it doesn't hurt!). When executing files without an interpreter, fish, like other shells, tries your system shell, typically /bin/sh . This is needed because some scripts are shipped without a shebang line. CONFIGURATION To store configuration write it to a file called ~/.config/fish/con- fig.fish . .fish scripts in ~/.config/fish/conf.d/ are also automatically executed before config.fish . These files are read on the startup of every shell, whether interactive and/or if they're login shells. Use status --is-interactive and status --is-login to do things only in interactive/login shells, respectively. This is the short version; for a full explanation, like for sysadmins or integration for developers of other software, see Configuration files . If you want to see what you changed over fish's defaults, see fish_delta . Examples: To add ~/linux/bin to PATH variable when using a login shell, add this to ~/.config/fish/config.fish file: if status --is-login set -gx PATH $PATH ~/linux/bin end This is just an example; using fish_add_path e.g. fish_add_path ~/linux/bin which only adds the path if it isn't included yet is eas- ier. To run commands on exit, use an event handler that is triggered by the exit of the shell: function on_exit --on-event fish_exit echo fish is now exiting end RESOURCES • The GitHub page < https://github.com/fish-shell/fish-shell/ > • The official Gitter channel < https://gitter.im/fish-shell/fish-shell > • The official mailing list at fish-users@lists.sourceforge.net < https://lists.sourceforge.net/lists/listinfo/fish-users > If you have an improvement for fish, you can submit it via the GitHub page. OTHER HELP PAGES Frequently asked questions What is the equivalent to this thing from bash (or other shells)? See Fish for bash users How do I set or clear an environment variable? Use the set command: set -x key value # typically set -gx key value set -e key Since fish 3.1 you can set an environment variable for just one command using the key=value some command syntax, like in other shells. The two lines below behave identically - unlike other shells, fish will output value both times: key=value echo $key begin; set -lx key value; echo $key; end Note that "exported" is not a scope , but an additional bit of state. A variable can be global and exported or local and exported or even uni- versal and exported. Typically it makes sense to make an exported vari- able global. How do I check whether a variable is defined? Use set -q var . For example, if set -q var; echo variable defined; end . To check multiple variables you can combine with and and or like so: if set -q var1; or set -q var2 echo either variable defined end Keep in mind that a defined variable could also be empty, either by having no elements (if set like set var ) or only empty elements (if set like set var "" ). Read on for how to deal with those. 
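A common companion to set -q, not shown above, is assigning a fallback only when a variable is missing; a minimal sketch (the variable and value are just examples):

    # If EDITOR is not defined yet, export a default for this shell and child processes.
    set -q EDITOR; or set -gx EDITOR vi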
How do I check whether a variable is not empty? Use string length -q -- $var . For example, if string length -q -- $var; echo not empty; end . Note that string length will interpret a list of multiple variables as a disjunction (meaning any/or): if string length -q -- $var1 $var2 $var3 echo at least one of these variables is not empty end Alternatively, use test -n "$var" , but remember that the variable must be double-quoted . For example, if test -n "$var"; echo not empty; end . The test command provides its own and (-a) and or (-o): if test -n "$var1" -o -n "$var2" -o -n "$var3" echo at least one of these variables is not empty end If you want to know if a variable has no elements , use set -q var[1] . Why doesn't set -Ux (exported universal variables) seem to work? A global variable of the same name already exists. Environment variables such as EDITOR or TZ can be set universally using set -Ux . However, if there is an environment variable already set be- fore fish starts (such as by login scripts or system administrators), it is imported into fish as a global variable. The variable scopes are searched from the "inside out", which means that local variables are checked first, followed by global variables, and finally universal variables. This means that the global value takes precedence over the universal value. To avoid this problem, consider changing the setting which fish inher- its. If this is not possible, add a statement to your configuration file (usually ~/.config/fish/config.fish ): set -gx EDITOR vim How do I run a command every login? What's fish's equivalent to .bashrc or .profile? Edit the file ~/.config/fish/config.fish [1], creating it if it does not exist (Note the leading period). Unlike .bashrc and .profile, this file is always read, even in non-in- teractive or login shells. To do something only in interactive shells, check status is-interactive like: if status is-interactive # use the coolbeans theme fish_config theme choose coolbeans end [1] The "~/.config" part of this can be set via $XDG_CONFIG_HOME, that's just the default. How do I set my prompt? The prompt is the output of the fish_prompt function. Put it in ~/.con- fig/fish/functions/fish_prompt.fish . For example, a simple prompt is: function fish_prompt set_color $fish_color_cwd echo -n (prompt_pwd) set_color normal echo -n ' > ' end You can also use the Web configuration tool, fish_config , to preview and choose from a gallery of sample prompts. Or you can use fish_config from the commandline: > fish_config prompt show # displays all the prompts fish ships with > fish_config prompt choose disco # loads the disco prompt in the current shell > fish_config prompt save # makes the change permanent If you want to modify your existing prompt, you can use funced and funcsave like: >_ funced fish_prompt # This opens up your editor (set in $EDITOR). # Modify the function, # save the file and repeat to your liking. # Once you are happy with it: >_ funcsave fish_prompt This also applies to fish_right_prompt and fish_mode_prompt . Why does my prompt show a [I]? That's the fish_mode_prompt . It is displayed by default when you've ac- tivated vi mode using fish_vi_key_bindings . If you haven't activated vi mode on purpose, you might have installed a third-party theme or plugin that does it. If you want to change or disable this display, modify the fish_mode_prompt function, for instance via funced . How do I customize my syntax highlighting colors? 
Use the web configuration tool, fish_config , or alter the fish_color family of environment variables . You can also use fish_config on the commandline, like: > fish_config theme show # to demonstrate all the colorschemes > fish_config theme choose coolbeans # to load the "coolbeans" theme > fish_config theme save # to make the change permanent How do I change the greeting message? Change the value of the variable fish_greeting or create a fish_greeting function. For example, to remove the greeting use: set -U fish_greeting Or if you prefer not to use a universal variable, use: set -g fish_greeting in config.fish . How do I run a command from history? Type some part of the command, and then hit the up () or down () arrow keys to navigate through history matches, or press ctrl-r to open the history in a searchable pager. In this pager you can press ctrl-r or ctrl-s to move to older or younger history respectively. Additional default key bindings include ctrl-p (up) and ctrl-n (down). See Searchable command history for more information. Why doesn't history substitution ("!$" etc.) work? Because history substitution is an awkward interface that was invented before interactive line editing was even possible. Instead of adding this pseudo-syntax, fish opts for nice history searching and recall features. Switching requires a small change of habits: if you want to modify an old line/word, first recall it, then edit. As a special case, most of the time history substitution is used as sudo !! . In that case just press alt-s , and it will recall your last commandline with sudo prefixed (or toggle a sudo prefix on the current commandline if there is anything). In general, fish's history recall works like this: • Like other shells, the Up arrow, up recalls whole lines, starting from the last executed line. So instead of typing !! , you would just hit the up-arrow. • If the line you want is far back in the history, type any part of the line and then press Up one or more times. This will filter the re- called lines to ones that include this text, and you will get to the line you want much faster. This replaces "!vi", "!?bar.c" and the like. If you want to see more context, you can press ctrl-r to open the history in the pager. • alt-up recalls individual arguments, starting from the last argument in the last executed line. This can be used instead of "!$". See documentation for more details about line editing in fish. That being said, you can use Abbreviations to implement history substi- tution. Here's just !! : function last_history_item; echo $history[1]; end abbr -a !! --position anywhere --function last_history_item Run this and !! will be replaced with the last history entry, anywhere on the commandline. Put it into config.fish to keep it. How do I run a subcommand? The backtick doesn't work! fish uses parentheses for subcommands. For example: for i in (ls) echo $i end It also supports the familiar $() syntax, even in quotes. Backticks are not supported because they are discouraged even in POSIX shells. They nest poorly and are hard to tell from single quotes ( '' ). My command (pkg-config) gives its output as a single long string? Unlike other shells, fish splits command substitutions only on new- lines, not spaces or tabs or the characters in $IFS. That means if you run count (printf '%s ' a b c) It will print 1 , because the "a b c " is used in one piece. But if you do count (printf '%s\n' a b c) it will print 3 , because it gave count the arguments "a", "b" and "c" separately. 
In the overwhelming majority of cases, splitting on spaces is unwanted, so this is an improvement. This is why you hear about problems with filenames with spaces, after all. However sometimes, especially with pkg-config and related tools, split- ting on spaces is needed. In these cases use string split -n " " like: g++ example_01.cpp (pkg-config --cflags --libs gtk+-2.0 | string split -n " ") The -n is so empty elements are removed like POSIX shells would do. How do I get the exit status of a command? Use the $status variable. This replaces the $? variable used in other shells. somecommand if test $status -eq 7 echo "That's my lucky number!" end If you are just interested in success or failure, you can run the com- mand directly as the if-condition: if somecommand echo "Command succeeded" else echo "Command failed" end Or if you just want to do one command in case the first succeeded or failed, use and or or : somecommand or someothercommand See the Conditions and the documentation for test and if for more in- formation. My command prints "No matches for wildcard" but works in bash In short: quote or escape the wildcard: scp user@ip:/dir/"string-*" When fish sees an unquoted * , it performs wildcard expansion . That means it tries to match filenames to the given string. If the wildcard doesn't match any files, fish prints an error instead of running the command: > echo *this*does*not*exist fish: No matches for wildcard '*this*does*not*exist'. See `help expand`. echo *this*does*not*exist ^ Now, bash also tries to match files in this case, but when it doesn't find a match, it passes along the literal wildcard string instead. That means that commands like the above scp user@ip:/dir/string-* or apt install postgres-* appear to work, because most of the time the string doesn't match and so it passes along the string-* , which is then interpreted by the re- ceiving program. But it also means that these commands can stop working at any moment once a matching file is encountered (because it has been created or the command is executed in a different working directory), and to deal with that bash needs workarounds like for f in ./*.mpg; do # We need to test if the file really exists because # the wildcard might have failed to match. test -f "$f" || continue mympgviewer "$f" done (from http://mywiki.wooledge.org/BashFAQ/004 ) For these reasons, fish does not do this, and instead expects asterisks to be quoted or escaped if they aren't supposed to be expanded. This is similar to bash's "failglob" option. Why won't SSH/SCP/rsync connect properly when fish is my login shell? This problem may show up as messages like " Received message too long ", " open terminal failed: not a terminal ", " Bad packet length ", or " Con- nection refused " with strange output in ssh_exchange_identification messages in the debug log. This usually happens because fish reads the user configuration file ( ~/.config/fish/config.fish ) always , whether it's in an interactive or login or non-interactive or non-login shell. This simplifies matters, but it also means when config.fish generates output, it will do that even in non-interactive shells like the one ssh/scp/rsync start when they connect. Anything in config.fish that produces output should be guarded with status is-interactive (or status is-login if you prefer): if status is-interactive ... end The same applies for example when you start tmux in config.fish without guards, which will cause a message like sessions should be nested with care, unset $TMUX to force . 
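Putting the guard described above into practice, a minimal config.fish sketch (the message is illustrative) keeps output away from non-interactive sessions such as scp or rsync:

    # ~/.config/fish/config.fish
    if status is-interactive
        # Only interactive shells may print; ssh/scp/rsync batch sessions stay clean.
        echo "Welcome back, $USER"
    end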
I'm getting weird graphical glitches (a staircase effect, ghost characters, cursor in the wrong position,...)? In a terminal, the application running inside it and the terminal it- self need to agree on the width of characters in order to handle cursor movement. This is more important to fish than other shells because features like syntax highlighting and autosuggestions are implemented by moving the cursor. Sometimes, there is disagreement on the width. There are numerous causes and fixes for this: • It is possible the character is simply too new for your system to know - in this case you need to refrain from using it. • Fish or your terminal might not know about the character or handle it wrong - in this case fish or your terminal needs to be fixed, or you need to update to a fixed version. • The character has an "ambiguous" width and fish thinks that means a width of X while your terminal thinks it's Y. In this case you either need to change your terminal's configuration or set $fish_ambigu- ous_width to the correct value. • The character is an emoji and the host system only supports Unicode 8, while you are running the terminal on a system that uses Unicode >= 9. In this case set $fish_emoji_width to 2. This also means that a few things are unsupportable: • Non-monospace fonts - there is no way for fish to figure out what width a specific character has as it has no influence on the termi- nal's font rendering. • Different widths for multiple ambiguous width characters - there is no way for fish to know which width you assign to each character. Uninstalling fish If you want to uninstall fish, first make sure fish is not set as your shell. Run chsh -s /bin/bash if you are not sure. If you installed it with a package manager, just use that package man- ager's uninstall function. If you built fish yourself, assuming you in- stalled it to /usr/local, do this: rm -Rf /usr/local/etc/fish /usr/local/share/fish ~/.config/fish rm /usr/local/share/man/man1/fish*.1 cd /usr/local/bin rm -f fish fish_indent Interactive use Fish prides itself on being really nice to use interactively. That's down to a few features we'll explain in the next few sections. Fish is used by giving commands in the fish language, see The Fish Lan- guage for information on that. Help Fish has an extensive help system. Use the help command to obtain help on a specific subject or command. For instance, writing help syntax displays the syntax section of this documentation. Fish also has man pages for its commands, and translates the help pages to man pages. For example, man set will show the documentation for set as a man page. Help on a specific builtin can also be obtained with the -h parameter. For instance, to obtain help on the fg builtin, either type fg -h or help fg . The main page can be viewed via help index (or just help ) or man fish-doc . The tutorial can be viewed with help tutorial or man fish-tu- torial . Autosuggestions fish suggests commands as you type, based on command history , comple- tions, and valid file paths. As you type commands, you will see a sug- gestion offered after the cursor, in a muted gray color (which can be changed with the fish_color_autosuggestion variable). To accept the autosuggestion (replacing the command line contents), press right () or ctrl-f . To accept the first suggested word, press alt-right () or alt-f . If the autosuggestion is not what you want, just ignore it: it won't execute unless you accept it. 
Autosuggestions are a powerful way to quickly summon frequently entered commands, by typing the first few characters. They are also an effi- cient technique for navigating through directory hierarchies. If you don't like autosuggestions, you can disable them by setting $fish_autosuggestion_enabled to 0: set -g fish_autosuggestion_enabled 0 Tab Completion Tab completion is a time saving feature of any modern shell. When you type tab , fish tries to guess the rest of the word under the cursor. If it finds just one possibility, it inserts it. If it finds more, it in- serts the longest unambiguous part and then opens a menu (the "pager") that you can navigate to find what you're looking for. The pager can be navigated with the arrow keys, pageup / pagedown , tab or shift-tab . Pressing ctrl-s (the pager-toggle-search binding - / in vi mode) opens up a search menu that you can use to filter the list. Fish provides some general purpose completions, like for commands, variable names, usernames or files. It also provides a large number of program specific scripted comple- tions. Most of these completions are simple options like the -l option for ls , but a lot are more advanced. For example: • man and whatis show the installed manual pages as completions. • make uses targets in the Makefile in the current directory as comple- tions. • mount uses mount points specified in fstab as completions. • apt , rpm and yum show installed or installable packages You can also write your own completions or install some you got from someone else. For that, see Writing your own completions . Completion scripts are loaded on demand, just like functions are . The difference is the $fish_complete_path list is used instead of $fish_function_path . Typically you can drop new completions in ~/.con- fig/fish/completions/name-of-command.fish and fish will find them auto- matically. Syntax highlighting Fish interprets the command line as it is typed and uses syntax high- lighting to provide feedback. The most important feedback is the detec- tion of potential errors. By default, errors are marked red. Detected errors include: • Non-existing commands. • Reading from or appending to a non-existing file. • Incorrect use of output redirects • Mismatched parenthesis To customize the syntax highlighting, you can set the environment vari- ables listed in the Variables for changing highlighting colors section. Fish also provides pre-made color themes you can pick with fish_config . Running just fish_config opens a browser interface, or you can use fish_config theme in the terminal. For example, to disable nearly all coloring: fish_config theme choose None Or, to see all themes, right in your terminal: fish_config theme show Syntax highlighting variables The colors used by fish for syntax highlighting can be configured by changing the values of various variables. The value of these variables can be one of the colors accepted by the set_color command. The modi- fier switches accepted by set_color like --bold , --dim , --italics , --reverse and --underline are also accepted. 
Example: to make errors highlighted and red, use: set fish_color_error red --bold The following variables are available to change the highlighting colors in fish:
• fish_color_normal : default color
• fish_color_command : commands like echo
• fish_color_keyword : keywords like if - this falls back on the command color if unset
• fish_color_quote : quoted text like "abc"
• fish_color_redirection : IO redirections like >/dev/null
• fish_color_end : process separators like ; and &
• fish_color_error : syntax errors
• fish_color_param : ordinary command parameters
• fish_color_valid_path : parameters that are filenames (if the file exists)
• fish_color_option : options starting with "-", up to the first "--" parameter
• fish_color_comment : comments like '# important'
• fish_color_selection : selected text in vi visual mode
• fish_color_operator : parameter expansion operators like * and ~
• fish_color_escape : character escapes like \n and \x70
• fish_color_autosuggestion : autosuggestions (the proposed rest of a command)
• fish_color_cwd : the current working directory in the default prompt
• fish_color_cwd_root : the current working directory in the default prompt for the root user
• fish_color_user : the username in the default prompt
• fish_color_host : the hostname in the default prompt
• fish_color_host_remote : the hostname in the default prompt for remote sessions (like ssh)
• fish_color_status : the last command's nonzero exit code in the default prompt
• fish_color_cancel : the '^C' indicator on a canceled command
• fish_color_search_match : history search matches and selected pager items (background only)
• fish_color_history_current : the current position in the history for commands like dirh and cdh
If a variable isn't set or is empty, fish usually tries $fish_color_normal , except for: • $fish_color_keyword , where it tries $fish_color_command first. • $fish_color_option , where it tries $fish_color_param first. • For $fish_color_valid_path , if that doesn't have a color, but only modifiers, it adds those to the color that would otherwise be used, like $fish_color_param . But if valid paths have a color, it uses that and adds in modifiers from the other color. Pager color variables fish will sometimes present a list of choices in a table, called the pager. Example: to set the background of each pager row, use: set fish_pager_color_background --background=white To have black text on alternating white and gray backgrounds: set fish_pager_color_prefix black set fish_pager_color_completion black set fish_pager_color_description black set fish_pager_color_background --background=white set fish_pager_color_secondary_background --background=brwhite Variables affecting the pager colors:
• fish_pager_color_progress : the progress bar at the bottom left corner
• fish_pager_color_background : the background color of a line
• fish_pager_color_prefix : the prefix string, i.e. the string that is to be completed
• fish_pager_color_completion : the completion itself, i.e. the proposed rest of the string
• fish_pager_color_description : the completion description
• fish_pager_color_selected_background : background of the selected completion
• fish_pager_color_selected_prefix : prefix of the selected completion
• fish_pager_color_selected_completion : suffix of the selected completion
• fish_pager_color_selected_description : description of the selected completion
• fish_pager_color_secondary_background : background of every second unselected completion
• fish_pager_color_secondary_prefix : prefix of every second unselected completion
• fish_pager_color_secondary_completion : suffix of every second unselected completion
• fish_pager_color_secondary_description : description of every second unselected completion
When the secondary or selected variables aren't set or are empty, the normal variables are used, except for $fish_pager_color_selected_background , where the background of $fish_color_search_match is tried first. Abbreviations To avoid needless typing, a frequently-run command like git checkout can be abbreviated to gco using the abbr command. abbr -a gco git checkout After entering gco and pressing space or enter , a gco in command position will turn into git checkout in the command line. If you want to use a literal gco sometimes, use ctrl-space [1]. Abbreviations are a lot more powerful than just replacing literal strings. For example you can make going up a number of directories easier with this: function multicd echo cd (string repeat -n (math (string length -- $argv[1]) - 1) ../) end abbr --add dotdot --regex '^\.\.+$' --function multicd Now, .. transforms to cd ../ , while ... turns into cd ../../ and .... expands to cd ../../../ . The advantage over aliases is that you can see the actual command before using it, add to it or change it, and the actual command will be stored in history. [1] Any binding that executes the expand-abbr or execute bind function will expand abbreviations. By default ctrl-space is bound to just inserting a space. Programmable prompt When it is fish's turn to ask for input (like after it started or the command ended), it will show a prompt. Often this looks something like: you@hostname ~> This prompt is determined by running the fish_prompt and fish_right_prompt functions. The output of the former is displayed on the left and the latter's output on the right side of the terminal. For vi mode , the output of fish_mode_prompt will be prepended on the left. Fish ships with a few prompts which you can see with fish_config . If you run just fish_config it will open a web interface [2] where you'll be shown the prompts and can pick which one you want. 
fish_config prompt show will show you the prompts right in your terminal. For example fish_config prompt choose disco will temporarily select the "disco" prompt. If you like it and decide to keep it, run fish_config prompt save . You can also change these functions yourself by running funced fish_prompt and funcsave fish_prompt once you are happy with the result (or fish_right_prompt if you want to change that). [2] The web interface runs purely locally on your computer and re- quires python to be installed. Configurable greeting When it is started interactively, fish tries to run the fish_greeting function. The default fish_greeting prints a simple message. You can change its text by changing the $fish_greeting variable, for instance using a universal variable : set -U fish_greeting or you can set it globally in config.fish : set -g fish_greeting 'Hey, stranger!' or you can script it by changing the function: function fish_greeting random choice "Hello!" "Hi" "G'day" "Howdy" end save this in config.fish or a function file . You can also use funced and funcsave to edit it easily. Programmable title When using most terminals, it is possible to set the text displayed in the titlebar of the terminal window. Fish does this by running the fish_title function. It is executed before and after a command and the output is used as a titlebar message. The status current-command builtin will always return the name of the job to be put into the foreground (or fish if control is returning to the shell) when the fish_title function is called. The first argument will contain the most recently executed foreground command as a string. The default title shows the hostname if connected via ssh, the cur- rently running command (unless it is fish) and the current working di- rectory. All of this is shortened to not make the tab too wide. Examples: To show the last command and working directory in the title: function fish_title # `prompt_pwd` shortens the title. This helps prevent tabs from becoming very wide. echo $argv[1] (prompt_pwd) pwd end Command line editor The fish editor features copy and paste, a searchable history and many editor functions that can be bound to special keyboard shortcuts. Like bash and other shells, fish includes two sets of keyboard short- cuts (or key bindings): one inspired by the Emacs text editor, and one by the vi text editor. The default editing mode is Emacs. You can switch to vi mode by running fish_vi_key_bindings and switch back with fish_default_key_bindings . You can also make your own key bindings by creating a function and setting the fish_key_bindings variable to its name. For example: function fish_hybrid_key_bindings --description \ "Vi-style bindings that inherit emacs-style bindings in all modes" for mode in default insert visual fish_default_key_bindings -M $mode end fish_vi_key_bindings --no-erase end set -g fish_key_bindings fish_hybrid_key_bindings While the key bindings included with fish include many of the shortcuts popular from the respective text editors, they are not a complete im- plementation. They include a shortcut to open the current command line in your preferred editor ( alt-e by default) if you need the full power of your editor. Shared bindings Some bindings are common across Emacs and vi mode, because they aren't text editing bindings, or because what vi/Vim does for a particular key doesn't make sense for a shell. • tab completes the current token. shift-tab completes the current to- ken and starts the pager's search mode. 
tab is the same as ctrl-i . • left () and right () move the cursor left or right by one character. If the cursor is already at the end of the line, and an autosugges- tion is available, right () accepts the autosuggestion. • enter executes the current commandline or inserts a newline if it's not complete yet (e.g. a ) or end is missing). • alt-enter inserts a newline at the cursor position. This is useful to add a line to a commandline that's already complete. • alt-left () and alt-right () move the cursor one word left or right (to the next space or punctuation mark), or moves forward/backward in the directory history if the command line is empty. If the cursor is already at the end of the line, and an autosuggestion is available, alt-right () (or alt-f ) accepts the first word in the suggestion. • ctrl-left () and ctrl-right () move the cursor one word left or right. These accept one word of the autosuggestion - the part they'd move over. • shift-left () and shift-right () move the cursor one word left or right, without stopping on punctuation. These accept one big word of the autosuggestion. • up () and down () (or ctrl-p and ctrl-n for emacs aficionados) search the command history for the previous/next command containing the string that was specified on the commandline before the search was started. If the commandline was empty when the search started, all commands match. See the history section for more information on his- tory searching. • alt-up () and alt-down () search the command history for the previ- ous/next token containing the token under the cursor before the search was started. If the commandline was not on a token when the search started, all tokens match. See the history section for more information on history searching. • ctrl-c interrupts/kills whatever is running (SIGINT). • ctrl-d deletes one character to the right of the cursor. If the com- mand line is empty, ctrl-d will exit fish. • ctrl-u removes contents from the beginning of line to the cursor (moving it to the killring ). • ctrl-l clears and repaints the screen. • ctrl-w removes the previous path component (everything up to the pre- vious "/", ":" or "@") (moving it to the Copy and paste (Kill Ring) ). • ctrl-x copies the current buffer to the system's clipboard, ctrl-v inserts the clipboard contents. (see fish_clipboard_copy and fish_clipboard_paste ) • alt-d or ctrl-delete moves the next word to the Copy and paste (Kill Ring) . • alt-d lists the directory history if the command line is empty. • alt-delete moves the next argument to the Copy and paste (Kill Ring) . • shift-delete removes t | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/reference/attributes.html#r-attributes.inner | Attributes - The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been met, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameters accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicate that a type will have more fields/variants added in future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13 |
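To tie the inner/outer distinction, the unsafe(..) wrapper, and a few of the indexed attributes together, here is a small library-style sketch; the item names are invented, and it assumes a toolchain recent enough to accept the unsafe(..) attribute syntax:

    // Inner attribute: applies to the enclosing crate.
    #![allow(dead_code)]

    // Outer attributes: apply to the item that follows.
    #[must_use = "the checksum should be inspected"]
    #[non_exhaustive]
    #[derive(Debug)]
    pub struct Checksum(pub u32);

    // An unsafe attribute is wrapped in unsafe(..), as described above.
    #[unsafe(no_mangle)]
    pub extern "C" fn checksum_value(value: u32) -> u32 {
        value
    }

    // cfg_attr conditionally includes another attribute.
    #[cfg_attr(test, allow(unused))]
    fn helper() {}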
https://doc.rust-lang.org/cargo/appendix/glossary.html#glossary | Appendix: Glossary - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book Glossary Artifact An artifact is the file or set of files created as a result of the compilation process. This includes linkable libraries, executable binaries, and generated documentation. Cargo Cargo is the Rust package manager , and the primary topic of this book. Cargo.lock See lock file . Cargo.toml See manifest . Crate A Rust crate is either a library or an executable program, referred to as either a library crate or a binary crate , respectively. Every target defined for a Cargo package is a crate . Loosely, the term crate may refer to either the source code of the target or to the compiled artifact that the target produces. It may also refer to a compressed package fetched from a registry . The source code for a given crate may be subdivided into modules . Edition A Rust edition is a developmental landmark of the Rust language. The edition of a package is specified in the Cargo.toml manifest , and individual targets can specify which edition they use. See the Edition Guide for more information. Feature The meaning of feature depends on the context: A feature is a named flag which allows for conditional compilation. A feature can refer to an optional dependency, or an arbitrary name defined in a Cargo.toml manifest that can be checked within source code. Cargo has unstable feature flags which can be used to enable experimental behavior of Cargo itself. The Rust compiler and Rustdoc have their own unstable feature flags (see The Unstable Book and The Rustdoc Book ). CPU targets have target features which specify capabilities of a CPU. Index The index is the searchable list of crates in a registry . Lock file The Cargo.lock lock file is a file that captures the exact version of every dependency used in a workspace or package . It is automatically generated by Cargo. See Cargo.toml vs Cargo.lock . Manifest A manifest is a description of a package or a workspace in a file named Cargo.toml . A virtual manifest is a Cargo.toml file that only describes a workspace, and does not include a package. Member A member is a package that belongs to a workspace . Module Rust’s module system is used to organize code into logical units called modules , which provide isolated namespaces within the code. The source code for a given crate may be subdivided into one or more separate modules. This is usually done to organize the code into areas of related functionality or to control the visible scope (public/private) of symbols within the source (structs, functions, and so on). A Cargo.toml file is primarily concerned with the package it defines, its crates, and the packages of the crates on which they depend. Nevertheless, you will see the term “module” often when working with Rust, so you should understand its relationship to a given crate. Package A package is a collection of source files and a Cargo.toml manifest file which describes the package. A package has a name and version which is used for specifying dependencies between packages. A package contains multiple targets , each of which is a crate . 
The Cargo.toml file describes the type of the crates (binary or library) within the package, along with some metadata about each one — how each is to be built, what their direct dependencies are, etc., as described throughout this book. The package root is the directory where the package’s Cargo.toml manifest is located. (Compare with workspace root .) The package ID specification , or SPEC , is a string used to uniquely reference a specific version of a package from a specific source. Small to medium sized Rust projects will only need a single package, though it is common for them to have multiple crates. Larger projects may involve multiple packages, in which case Cargo workspaces can be used to manage common dependencies and other related metadata between the packages. Package manager Broadly speaking, a package manager is a program (or collection of related programs) in a software ecosystem that automates the process of obtaining, installing, and upgrading artifacts. Within a programming language ecosystem, a package manager is a developer-focused tool whose primary functionality is to download library artifacts and their dependencies from some central repository; this capability is often combined with the ability to perform software builds (by invoking the language-specific compiler). Cargo is the package manager within the Rust ecosystem. Cargo downloads your Rust package ’s dependencies ( artifacts known as crates ), compiles your packages, makes distributable packages, and (optionally) uploads them to crates.io , the Rust community’s package registry . Package registry See registry . Project Another name for a package . Registry A registry is a service that contains a collection of downloadable crates that can be installed or used as dependencies for a package . The default registry in the Rust ecosystem is crates.io . The registry has an index which contains a list of all crates, and tells Cargo how to download the crates that are needed. Source A source is a provider that contains crates that may be included as dependencies for a package . There are several kinds of sources: Registry source — See registry . Local registry source — A set of crates stored as compressed files on the filesystem. See Local Registry Sources . Directory source — A set of crates stored as uncompressed files on the filesystem. See Directory Sources . Path source — An individual package located on the filesystem (such as a path dependency ) or a set of multiple packages (such as path overrides ). Git source — Packages located in a git repository (such as a git dependency or git source ). See Source Replacement for more information. Spec See package ID specification . Target The meaning of the term target depends on the context: Cargo Target — Cargo packages consist of targets which correspond to artifacts that will be produced. Packages can have library, binary, example, test, and benchmark targets. The list of targets are configured in the Cargo.toml manifest , often inferred automatically by the directory layout of the source files. Target Directory — Cargo places built artifacts in the target directory. By default this is a directory named target at the workspace root, or the package root if not using a workspace. The directory may be changed with the --target-dir command-line option, the CARGO_TARGET_DIR environment variable , or the build.target-dir config option . For more information see the build cache documentation. 
Target Architecture — The OS and machine architecture for the built artifacts are typically referred to as a target . Target Triple — A triple is a specific format for specifying a target architecture. Triples may be referred to as a target triple which is the architecture for the artifact produced, and the host triple which is the architecture that the compiler is running on. The target triple can be specified with the --target command-line option or the build.target config option . The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> where: arch = The base CPU architecture, for example x86_64 , i686 , arm , thumb , mips , etc. sub = The CPU sub-architecture, for example arm has v7 , v7s , v5te , etc. vendor = The vendor, for example unknown , apple , pc , nvidia , etc. sys = The system name, for example linux , windows , darwin , etc. none is typically used for bare-metal without an OS. abi = The ABI, for example gnu , android , eabi , etc. Some parameters may be omitted. Run rustc --print target-list for a list of supported targets. Test Targets Cargo test targets generate binaries which help verify proper operation and correctness of code. There are two types of test artifacts: Unit test — A unit test is an executable binary compiled directly from a library or a binary target. It contains the entire contents of the library or binary code, and runs #[test] annotated functions, intended to verify individual units of code. Integration test target — An integration test target is an executable binary compiled from a test target which is a distinct crate whose source is located in the tests directory or specified by the [[test]] table in the Cargo.toml manifest . It is intended to only test the public API of a library, or execute a binary to verify its operation. Workspace A workspace is a collection of one or more packages that share common dependency resolution (with a shared Cargo.lock lock file ), output directory, and various settings such as profiles. A virtual workspace is a workspace where the root Cargo.toml manifest does not define a package, and only lists the workspace members . The workspace root is the directory where the workspace’s Cargo.toml manifest is located. (Compare with package root .) | 2026-01-13T09:29:13 |
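To make the workspace, member, and virtual manifest terms above concrete, here is a minimal sketch of a virtual workspace manifest; the member names are hypothetical and not taken from the glossary itself.

# Cargo.toml at the workspace root (a virtual manifest: it defines no [package])
[workspace]
members = ["mylib", "mybin"]   # each member is a package in its own subdirectory
resolver = "2"                 # shared dependency resolution for all members

All members then share one Cargo.lock lock file and one target directory at the workspace root, matching the workspace definition given above.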
https://doc.rust-lang.org/reference/attributes.html#meta-item-attribute-syntax | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the Windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicates that a type will have more fields or variants added in the future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13
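As a rough, non-normative sketch of how a few of the attributes indexed above are written in practice (the items, versions, and messages are invented for illustration):

// Conditional compilation and lint-level attributes from the index above.
#[cfg(target_os = "linux")]
#[allow(dead_code)]
fn linux_only_helper() {}

// Diagnostics attributes: unused results trigger a lint, and callers are steered away.
#[must_use = "check whether the operation succeeded"]
#[deprecated(since = "1.2.0", note = "use `linux_only_helper` instead")]
fn old_helper() -> bool { true }

// Testing attributes.
#[test]
#[should_panic]
fn panics_as_expected() { panic!("expected"); }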
https://doc.rust-lang.org/cargo/reference/config.html#configuration | Configuration - The Cargo Book Configuration This document explains how Cargo’s configuration system works, as well as the available configuration keys. For configuration of a package through its manifest, see the manifest format . Hierarchical structure Cargo allows local configuration for a particular package as well as global configuration. It looks for configuration files in the current directory and all parent directories. If, for example, Cargo were invoked in /projects/foo/bar/baz , then the following configuration files would be probed for and unified in this order: /projects/foo/bar/baz/.cargo/config.toml /projects/foo/bar/.cargo/config.toml /projects/foo/.cargo/config.toml /projects/.cargo/config.toml /.cargo/config.toml $CARGO_HOME/config.toml which defaults to: Windows: %USERPROFILE%\.cargo\config.toml Unix: $HOME/.cargo/config.toml With this structure, you can specify configuration per package and even check it into version control. You can also specify personal defaults with a configuration file in your home directory. If a key is specified in multiple config files, the values will get merged together. For numbers, strings, and booleans, the value in the deeper config directory takes precedence over ancestor directories, with the home directory having the lowest priority. Arrays will be joined together, with higher-precedence items placed later in the merged array. At present, when invoked from a workspace, Cargo does not read config files from crates within the workspace. That is, if a workspace has two crates in it, named /projects/foo/bar/baz/mylib and /projects/foo/bar/baz/mybin , and there are Cargo configs at /projects/foo/bar/baz/mylib/.cargo/config.toml and /projects/foo/bar/baz/mybin/.cargo/config.toml , Cargo does not read those configuration files if it is invoked from the workspace root ( /projects/foo/bar/baz/ ). Note: Cargo also reads config files without the .toml extension, such as .cargo/config . Support for the .toml extension was added in version 1.39 and is the preferred form. If both files exist, Cargo will use the file without the extension. Configuration format Configuration files are written in the TOML format (like the manifest), with simple key-value pairs inside sections (tables). The following is a quick overview of all settings, with detailed descriptions found below.
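Before that overview, a small sketch of the merging behavior described under Hierarchical structure; the file locations and values here are hypothetical.

# /projects/foo/.cargo/config.toml  (closer to the invocation directory, higher precedence)
[build]
jobs = 4
rustflags = ["--cfg", "local"]

# $HOME/.cargo/config.toml  (home directory, lowest priority)
[build]
jobs = 2
rustflags = ["--cfg", "global"]

# Effective result: jobs = 4 (the deeper value wins), while the arrays are joined
# with higher-precedence items placed later: ["--cfg", "global", "--cfg", "local"]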
paths = ["/path/to/override"] # path dependency overrides [alias] # command aliases b = "build" c = "check" t = "test" r = "run" rr = "run --release" recursive_example = "rr --example recursions" space_example = ["run", "--release", "--", "\"command list\""] [build] jobs = 1 # number of parallel jobs, defaults to # of CPUs rustc = "rustc" # the rust compiler tool rustc-wrapper = "…" # run this wrapper instead of `rustc` rustc-workspace-wrapper = "…" # run this wrapper instead of `rustc` for workspace members rustdoc = "rustdoc" # the doc generator tool target = "triple" # build for the target triple (ignored by `cargo install`) target-dir = "target" # path of where to place generated artifacts build-dir = "target" # path of where to place intermediate build artifacts rustflags = ["…", "…"] # custom flags to pass to all compiler invocations rustdocflags = ["…", "…"] # custom flags to pass to rustdoc incremental = true # whether or not to enable incremental compilation dep-info-basedir = "…" # path for the base directory for targets in depfiles [credential-alias] # Provides a way to define aliases for credential providers. my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] [doc] browser = "chromium" # browser to use with `cargo doc --open`, # overrides the `BROWSER` environment variable [env] # Set ENV_VAR_NAME=value for any process run by Cargo ENV_VAR_NAME = "value" # Set even if already present in environment ENV_VAR_NAME_2 = { value = "value", force = true } # `value` is relative to the parent of `.cargo/config.toml`, env var will be the full absolute path ENV_VAR_NAME_3 = { value = "relative/path", relative = true } [future-incompat-report] frequency = 'always' # when to display a notification about a future incompat report [cache] auto-clean-frequency = "1 day" # How often to perform automatic cache cleaning [cargo-new] vcs = "none" # VCS to use ('git', 'hg', 'pijul', 'fossil', 'none') [http] debug = false # HTTP debugging proxy = "host:port" # HTTP proxy in libcurl format ssl-version = "tlsv1.3" # TLS version to use ssl-version.max = "tlsv1.3" # maximum TLS version ssl-version.min = "tlsv1.1" # minimum TLS version timeout = 30 # timeout for each HTTP request, in seconds low-speed-limit = 10 # network timeout threshold (bytes/sec) cainfo = "cert.pem" # path to Certificate Authority (CA) bundle proxy-cainfo = "cert.pem" # path to proxy Certificate Authority (CA) bundle check-revoke = true # check for SSL certificate revocation multiplexing = true # HTTP/2 multiplexing user-agent = "…" # the user-agent header [install] root = "/some/path" # `cargo install` destination directory [net] retry = 3 # network retries git-fetch-with-cli = true # use the `git` executable for git operations offline = true # do not access the network [net.ssh] known-hosts = ["..."] # known SSH host keys [patch.<registry>] # Same keys as for [patch] in Cargo.toml [profile.<name>] # Modify profile settings via config. inherits = "dev" # Inherits settings from [profile.dev]. opt-level = 0 # Optimization level. debug = true # Include debug info. split-debuginfo = '...' # Debug info splitting behavior. strip = "none" # Removes symbols or debuginfo. debug-assertions = true # Enables debug assertions. overflow-checks = true # Enables runtime integer overflow checks. lto = false # Sets link-time optimization. panic = 'unwind' # The panic strategy. incremental = true # Incremental compilation. codegen-units = 16 # Number of code generation units. rpath = false # Sets the rpath linking option. 
[profile.<name>.build-override] # Overrides build-script settings. # Same keys for a normal profile. [profile.<name>.package.<name>] # Override profile for a package. # Same keys for a normal profile (minus `panic`, `lto`, and `rpath`). [resolver] incompatible-rust-versions = "allow" # Specifies how resolver reacts to these [registries.<name>] # registries other than crates.io index = "…" # URL of the registry index token = "…" # authentication token for the registry credential-provider = "cargo:token" # The credential provider for this registry. [registries.crates-io] protocol = "sparse" # The protocol to use to access crates.io. [registry] default = "…" # name of the default registry token = "…" # authentication token for crates.io credential-provider = "cargo:token" # The credential provider for crates.io. global-credential-providers = ["cargo:token"] # The credential providers to use by default. [source.<name>] # source definition and replacement replace-with = "…" # replace this source with the given named source directory = "…" # path to a directory source registry = "…" # URL to a registry source local-registry = "…" # path to a local registry source git = "…" # URL of a git repository source branch = "…" # branch name for the git repository tag = "…" # tag name for the git repository rev = "…" # revision for the git repository [target.<triple>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` rustdocflags = ["…", "…"] # custom flags for `rustdoc` [target.<cfg>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` [target.<triple>.<links>] # `links` build script override rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] quiet = false # whether cargo output is quiet verbose = false # whether cargo provides verbose output color = 'auto' # whether cargo colorizes output hyperlinks = true # whether cargo inserts links into output unicode = true # whether cargo can render output using non-ASCII unicode characters progress.when = 'auto' # whether cargo shows progress bar progress.width = 80 # width of progress bar progress.term-integration = true # whether cargo reports progress to terminal emulator Environment variables Cargo can also be configured through environment variables in addition to the TOML configuration files. For each configuration key of the form foo.bar the environment variable CARGO_FOO_BAR can also be used to define the value. Keys are converted to uppercase, dots and dashes are converted to underscores. For example the target.x86_64-unknown-linux-gnu.runner key can also be defined by the CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER environment variable. Environment variables will take precedence over TOML configuration files. Currently only integer, boolean, string and some array values are supported to be defined by environment variables. Descriptions below indicate which keys support environment variables and otherwise they are not supported due to technical issues . In addition to the system above, Cargo recognizes a few other specific environment variables . Command-line overrides Cargo also accepts arbitrary configuration overrides through the --config command-line option. 
The argument should be either KEY=VALUE in TOML syntax or a path to an extra configuration file: # With `KEY=VALUE` in TOML syntax cargo --config net.git-fetch-with-cli=true fetch # With a path to a configuration file cargo --config ./path/to/my/extra-config.toml fetch The --config option may be specified multiple times, in which case the values are merged in left-to-right order, using the same merging logic that is used when multiple configuration files apply. Configuration values specified this way take precedence over environment variables, which take precedence over configuration files. When the --config option is used to provide an extra configuration file, the configuration loaded this way follows the same precedence rules as values specified directly with --config . Some examples of what it looks like using Bourne shell syntax: # Most shells will require escaping. cargo --config http.proxy=\"http://example.com\" … # Spaces may be used. cargo --config "net.git-fetch-with-cli = true" … # TOML array example. Single quotes make it easier to read and write. cargo --config 'build.rustdocflags = ["--html-in-header", "header.html"]' … # Example of a complex TOML key. cargo --config "target.'cfg(all(target_arch = \"arm\", target_os = \"none\"))'.runner = 'my-runner'" … # Example of overriding a profile setting. cargo --config profile.dev.package.image.opt-level=3 … Config-relative paths Paths in config files may be absolute, relative, or a bare name without any path separators. Paths for executables without a path separator will use the PATH environment variable to search for the executable. Paths for non-executables will be relative to where the config value is defined. In particular, the rules are: For environment variables, paths are relative to the current working directory. For config values loaded directly from the --config KEY=VALUE option, paths are relative to the current working directory. For config files, paths are relative to the parent directory of the directory where the config files were defined, regardless of whether those files come from hierarchical probing or from the --config <path> option. Note: To maintain consistency with existing .cargo/config.toml probing behavior, it is by design that a path in a config file passed via --config <path> is also relative to two levels up from the config file itself. To avoid unexpected results, the rule of thumb is to put your extra config files at the same level as the discovered .cargo/config.toml in your project. For instance, given a project /my/project , it is recommended to put config files under /my/project/.cargo or a new directory at the same level, such as /my/project/.config . # Relative path examples. [target.x86_64-unknown-linux-gnu] runner = "foo" # Searches `PATH` for `foo`. [source.vendored-sources] # Directory is relative to the parent where `.cargo/config.toml` is located. # For example, `/my/project/.cargo/config.toml` would result in `/my/project/vendor`. directory = "vendor" Executable paths with arguments Some Cargo commands invoke external programs, which can be configured as a path and some number of arguments. The value may be an array of strings like ['/path/to/program', 'somearg'] or a space-separated string like '/path/to/program somearg' . If the path to the executable contains a space, the list form must be used. If Cargo is passing other arguments to the program, such as a path to open or run, they will be passed after the last specified argument in the value of an option of this format.
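As a sketch of the executable-path-with-arguments format just described, here is a runner configured with extra arguments; the target triple and emulator command are only examples, not defaults:

[target.aarch64-unknown-linux-gnu]
# List form; required whenever the program path itself contains a space.
runner = ["qemu-aarch64", "-L", "/usr/aarch64-linux-gnu"]
# Equivalent space-separated string form:
# runner = "qemu-aarch64 -L /usr/aarch64-linux-gnu"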
If the specified program does not have path separators, Cargo will search PATH for its executable. Credentials Configuration values with sensitive information are stored in the $CARGO_HOME/credentials.toml file. This file is automatically created and updated by cargo login and cargo logout when using the cargo:token credential provider. Tokens are used by some Cargo commands such as cargo publish for authenticating with remote registries. Care should be taken to protect the tokens and to keep them secret. It follows the same format as Cargo config files. [registry] token = "…" # Access token for crates.io [registries.<name>] token = "…" # Access token for the named registry As with most other config values, tokens may be specified with environment variables. The token for crates.io may be specified with the CARGO_REGISTRY_TOKEN environment variable. Tokens for other registries may be specified with environment variables of the form CARGO_REGISTRIES_<name>_TOKEN where <name> is the name of the registry in all capital letters. Note: Cargo also reads and writes credential files without the .toml extension, such as .cargo/credentials . Support for the .toml extension was added in version 1.39. In version 1.68, Cargo writes to the file with the extension by default. However, for backward compatibility reason, when both files exist, Cargo will read and write the file without the extension. Configuration keys This section documents all configuration keys. The description for keys with variable parts are annotated with angled brackets like target.<triple> where the <triple> part can be any target triple like target.x86_64-pc-windows-msvc . paths Type: array of strings (paths) Default: none Environment: not supported An array of paths to local packages which are to be used as overrides for dependencies. For more information see the Overriding Dependencies guide . [alias] Type: string or array of strings Default: see below Environment: CARGO_ALIAS_<name> The [alias] table defines CLI command aliases. For example, running cargo b is an alias for running cargo build . Each key in the table is the subcommand, and the value is the actual command to run. The value may be an array of strings, where the first element is the command and the following are arguments. It may also be a string, which will be split on spaces into subcommand and arguments. The following aliases are built-in to Cargo: [alias] b = "build" c = "check" d = "doc" t = "test" r = "run" rm = "remove" Aliases are not allowed to redefine existing built-in commands. Aliases are recursive: [alias] rr = "run --release" recursive_example = "rr --example recursions" [build] The [build] table controls build-time operations and compiler settings. build.jobs Type: integer or string Default: number of logical CPUs Environment: CARGO_BUILD_JOBS Sets the maximum number of compiler processes to run in parallel. If negative, it sets the maximum number of compiler processes to the number of logical CPUs plus provided value. Should not be 0. If a string default is provided, it sets the value back to defaults. Can be overridden with the --jobs CLI option. build.rustc Type: string (program path) Default: "rustc" Environment: CARGO_BUILD_RUSTC or RUSTC Sets the executable to use for rustc . build.rustc-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WRAPPER or RUSTC_WRAPPER Sets a wrapper to execute instead of rustc . 
The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). build.rustc-workspace-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WORKSPACE_WRAPPER or RUSTC_WORKSPACE_WRAPPER Sets a wrapper to execute instead of rustc , for workspace members only. When building a single-package project without workspaces, that package is considered to be the workspace. The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). It affects the filename hash so that artifacts produced by the wrapper are cached separately. If both rustc-wrapper and rustc-workspace-wrapper are set, then they will be nested: the final invocation is $RUSTC_WRAPPER $RUSTC_WORKSPACE_WRAPPER $RUSTC . build.rustdoc Type: string (program path) Default: "rustdoc" Environment: CARGO_BUILD_RUSTDOC or RUSTDOC Sets the executable to use for rustdoc . build.target Type: string or array of strings Default: host platform Environment: CARGO_BUILD_TARGET The default target platform triples to compile to. Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. Can be overridden with the --target CLI option. [build] target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"] build.target-dir Type: string (path) Default: "target" Environment: CARGO_BUILD_TARGET_DIR or CARGO_TARGET_DIR The path to where all compiler output is placed. The default if not specified is a directory named target located at the root of the workspace. Can be overridden with the --target-dir CLI option. For more information see the build cache documentation . build.build-dir Type: string (path) Default: Defaults to the value of build.target-dir Environment: CARGO_BUILD_BUILD_DIR The directory where intermediate build artifacts will be stored. Intermediate artifacts are produced by Rustc/Cargo during the build process. This option supports path templating. Available template variables: {workspace-root} resolves to root of the current workspace. {cargo-cache-home} resolves to CARGO_HOME {workspace-path-hash} resolves to a hash of the manifest path For more information see the build cache documentation . build.rustflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTFLAGS or CARGO_ENCODED_RUSTFLAGS or RUSTFLAGS Extra command-line flags to pass to rustc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTFLAGS environment variable. RUSTFLAGS environment variable. All matching target.<triple>.rustflags and target.<cfg>.rustflags config entries joined together. build.rustflags config value. Additional flags may also be passed with the cargo rustc command. If the --target flag (or build.target ) is used, then the flags will only be passed to the compiler for the target. Things being built for the host, such as build scripts or proc macros, will not receive the args. 
Without --target , the flags will be passed to all compiler invocations (including build scripts and proc macros) because dependencies are shared. If you have args that you do not want to pass to build scripts or proc macros and are building for the host, pass --target with the host triple . It is not recommended to pass in flags that Cargo itself usually manages. For example, the flags driven by profiles are best handled by setting the appropriate profile setting. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.rustdocflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTDOCFLAGS or CARGO_ENCODED_RUSTDOCFLAGS or RUSTDOCFLAGS Extra command-line flags to pass to rustdoc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTDOCFLAGS environment variable. RUSTDOCFLAGS environment variable. All matching target.<triple>.rustdocflags config entries joined together. build.rustdocflags config value. Additional flags may also be passed with the cargo rustdoc command. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.incremental Type: bool Default: from profile Environment: CARGO_BUILD_INCREMENTAL or CARGO_INCREMENTAL Whether or not to perform incremental compilation . The default if not set is to use the value from the profile . Otherwise this overrides the setting of all profiles. The CARGO_INCREMENTAL environment variable can be set to 1 to force enable incremental compilation for all profiles, or 0 to disable it. This env var overrides the config setting. build.dep-info-basedir Type: string (path) Default: none Environment: CARGO_BUILD_DEP_INFO_BASEDIR Strips the given path prefix from dep info file paths. This config setting is intended to convert absolute paths to relative paths for tools that require relative paths. The setting itself is a config-relative path. So, for example, a value of "." would strip all paths starting with the parent directory of the .cargo directory. build.pipelining This option is deprecated and unused. Cargo always has pipelining enabled. [credential-alias] Type: string or array of strings Default: empty Environment: CARGO_CREDENTIAL_ALIAS_<name> The [credential-alias] table defines credential provider aliases. These aliases can be referenced as an element of the registry.global-credential-providers array, or as a credential provider for a specific registry under registries.<NAME>.credential-provider . If specified as a string, the value will be split on spaces into path and arguments. For example, to define an alias called my-alias : [credential-alias] my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] See Registry Authentication for more information. [doc] The [doc] table defines options for the cargo doc command. 
doc.browser Type: string or array of strings ( program path with args ) Default: BROWSER environment variable, or, if that is missing, opening the link in a system specific way This option sets the browser to be used by cargo doc , overriding the BROWSER environment variable when opening documentation with the --open option. [cargo-new] The [cargo-new] table defines defaults for the cargo new command. cargo-new.name This option is deprecated and unused. cargo-new.email This option is deprecated and unused. cargo-new.vcs Type: string Default: "git" or "none" Environment: CARGO_CARGO_NEW_VCS Specifies the source control system to use for initializing a new repository. Valid values are git , hg (for Mercurial), pijul , fossil or none to disable this behavior. Defaults to git , or none if already inside a VCS repository. Can be overridden with the --vcs CLI option. [env] The [env] section allows you to set additional environment variables for build scripts, rustc invocations, cargo run and cargo build . [env] OPENSSL_DIR = "/opt/openssl" By default, the variables specified will not override values that already exist in the environment. This behavior can be changed by setting the force flag. Setting the relative flag evaluates the value as a config-relative path that is relative to the parent directory of the .cargo directory that contains the config.toml file. The value of the environment variable will be the full absolute path. [env] TMPDIR = { value = "/home/tmp", force = true } OPENSSL_DIR = { value = "vendor/openssl", relative = true } [future-incompat-report] The [future-incompat-report] table controls setting for future incompat reporting future-incompat-report.frequency Type: string Default: "always" Environment: CARGO_FUTURE_INCOMPAT_REPORT_FREQUENCY Controls how often we display a notification to the terminal when a future incompat report is available. Possible values: always (default): Always display a notification when a command (e.g. cargo build ) produces a future incompat report never : Never display a notification [cache] The [cache] table defines settings for cargo’s caches. Global caches When running cargo commands, Cargo will automatically track which files you are using within the global cache. Periodically, Cargo will delete files that have not been used for some period of time. It will delete files that have to be downloaded from the network if they have not been used in 3 months. Files that can be generated without network access will be deleted if they have not been used in 1 month. The automatic deletion of files only occurs when running commands that are already doing a significant amount of work, such as all of the build commands ( cargo build , cargo test , cargo check , etc.), and cargo fetch . Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time. Note : This tracking is currently only implemented for the global cache in Cargo’s home directory. This includes registry indexes and source files downloaded from registries and git dependencies. Support for tracking build artifacts is not yet implemented, and tracked in cargo#13136 . Additionally, there is an unstable feature to support manually triggering cache cleaning, and to further customize the configuration options. See the Unstable chapter for more information. 
cache.auto-clean-frequency Type: string Default: "1 day" Environment: CARGO_CACHE_AUTO_CLEAN_FREQUENCY This option defines how often Cargo will automatically delete unused files in the global cache. This does not define how old the files must be, those thresholds are described above . It supports the following settings: "never" — Never deletes old files. "always" — Checks to delete old files every time Cargo runs. An integer followed by “seconds”, “minutes”, “hours”, “days”, “weeks”, or “months” — Checks to delete old files at most the given time frame. [http] The [http] table defines settings for HTTP behavior. This includes fetching crate dependencies and accessing remote git repositories. http.debug Type: boolean Default: false Environment: CARGO_HTTP_DEBUG If true , enables debugging of HTTP requests. The debug information can be seen by setting the CARGO_LOG=network=debug environment variable (or use network=trace for even more information). Be wary when posting logs from this output in a public location. The output may include headers with authentication tokens which you don’t want to leak! Be sure to review logs before posting them. http.proxy Type: string Default: none Environment: CARGO_HTTP_PROXY or HTTPS_PROXY or https_proxy or http_proxy Sets an HTTP and HTTPS proxy to use. The format is in libcurl format as in [protocol://]host[:port] . If not set, Cargo will also check the http.proxy setting in your global git configuration. If none of those are set, the HTTPS_PROXY or https_proxy environment variables set the proxy for HTTPS requests, and http_proxy sets it for HTTP requests. http.timeout Type: integer Default: 30 Environment: CARGO_HTTP_TIMEOUT or HTTP_TIMEOUT Sets the timeout for each HTTP request, in seconds. http.cainfo Type: string (path) Default: none Environment: CARGO_HTTP_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify TLS certificates. If not specified, Cargo attempts to use the system certificates. http.proxy-cainfo Type: string (path) Default: falls back to http.cainfo if not set Environment: CARGO_HTTP_PROXY_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify proxy TLS certificates. http.check-revoke Type: boolean Default: true (Windows) false (all others) Environment: CARGO_HTTP_CHECK_REVOKE This determines whether or not TLS certificate revocation checks should be performed. This only works on Windows. http.ssl-version Type: string or min/max table Default: none Environment: CARGO_HTTP_SSL_VERSION This sets the minimum TLS version to use. It takes a string, with one of the possible values of "default" , "tlsv1" , "tlsv1.0" , "tlsv1.1" , "tlsv1.2" , or "tlsv1.3" . This may alternatively take a table with two keys, min and max , which each take a string value of the same kind that specifies the minimum and maximum range of TLS versions to use. The default is a minimum version of "tlsv1.0" and a max of the newest version supported on your platform, typically "tlsv1.3" . http.low-speed-limit Type: integer Default: 10 Environment: CARGO_HTTP_LOW_SPEED_LIMIT This setting controls timeout behavior for slow connections. If the average transfer speed in bytes per second is below the given value for http.timeout seconds (default 30 seconds), then the connection is considered too slow and Cargo will abort and retry. http.multiplexing Type: boolean Default: true Environment: CARGO_HTTP_MULTIPLEXING When true , Cargo will attempt to use the HTTP2 protocol with multiplexing. 
This allows multiple requests to use the same connection, usually improving performance when fetching multiple files. If false , Cargo will use HTTP 1.1 without pipelining. http.user-agent Type: string Default: Cargo’s version Environment: CARGO_HTTP_USER_AGENT Specifies a custom user-agent header to use. The default if not specified is a string that includes Cargo’s version. [install] The [install] table defines defaults for the cargo install command. install.root Type: string (path) Default: Cargo’s home directory Environment: CARGO_INSTALL_ROOT Sets the path to the root directory for installing executables for cargo install . Executables go into a bin directory underneath the root. To track information of installed executables, some extra files, such as .crates.toml and .crates2.json , are also created under this root. The default if not specified is Cargo’s home directory (default .cargo in your home directory). Can be overridden with the --root command-line option. [net] The [net] table controls networking configuration. net.retry Type: integer Default: 3 Environment: CARGO_NET_RETRY Number of times to retry possibly spurious network errors. net.git-fetch-with-cli Type: boolean Default: false Environment: CARGO_NET_GIT_FETCH_WITH_CLI If this is true , then Cargo will use the git executable to fetch registry indexes and git dependencies. If false , then it uses a built-in git library. Setting this to true can be helpful if you have special authentication requirements that Cargo does not support. See Git Authentication for more information about setting up git authentication. net.offline Type: boolean Default: false Environment: CARGO_NET_OFFLINE If this is true , then Cargo will avoid accessing the network, and attempt to proceed with locally cached data. If false , Cargo will access the network as needed, and generate an error if it encounters a network error. Can be overridden with the --offline command-line option. net.ssh The [net.ssh] table contains settings for SSH connections. net.ssh.known-hosts Type: array of strings Default: see description Environment: not supported The known-hosts array contains a list of SSH host keys that should be accepted as valid when connecting to an SSH server (such as for SSH git dependencies). Each entry should be a string in a format similar to OpenSSH known_hosts files. Each string should start with one or more hostnames separated by commas, a space, the key type name, a space, and the base64-encoded key. For example: [net.ssh] known-hosts = [ "example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFO4Q5T0UV0SQevair9PFwoxY9dl4pQl3u5phoqJH3cF" ] Cargo will attempt to load known hosts keys from common locations supported in OpenSSH, and will join those with any listed in a Cargo configuration file. If any matching entry has the correct key, the connection will be allowed. Cargo comes with the host keys for github.com built-in. If those ever change, you can add the new keys to the config or known_hosts file. See Git Authentication for more details. [patch] Just as you can override dependencies using [patch] in Cargo.toml , you can override them in the cargo configuration file to apply those patches to any affected build. The format is identical to the one used in Cargo.toml . Since .cargo/config.toml files are not usually checked into source control, you should prefer patching using Cargo.toml where possible to ensure that other developers can compile your crate in their own environments. 
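For illustration, a config-file patch entry uses the same shape as a [patch] section in Cargo.toml; the crate name and path below are hypothetical:

# .cargo/config.toml
[patch.crates-io]
serde = { path = "/path/to/local/serde" }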
Patching through cargo configuration files is generally only appropriate when the patch section is automatically generated by an external build tool. If a given dependency is patched both in a cargo configuration file and a Cargo.toml file, the patch in the configuration file is used. If multiple configuration files patch the same dependency, standard cargo configuration merging is used, which prefers the value defined closest to the current directory, with $HOME/.cargo/config.toml taking the lowest precedence. Relative path dependencies in such a [patch] section are resolved relative to the configuration file they appear in. [profile] The [profile] table can be used to globally change profile settings, and override settings specified in Cargo.toml . It has the same syntax and options as profiles specified in Cargo.toml . See the Profiles chapter for details about the options. [profile.<name>.build-override] Environment: CARGO_PROFILE_<name>_BUILD_OVERRIDE_<key> The build-override table overrides settings for build scripts, proc macros, and their dependencies. It has the same keys as a normal profile. See the overrides section for more details. [profile.<name>.package.<name>] Environment: not supported The package table overrides settings for specific packages. It has the same keys as a normal profile, minus the panic , lto , and rpath settings. See the overrides section for more details. profile.<name>.codegen-units Type: integer Default: See profile docs. Environment: CARGO_PROFILE_<name>_CODEGEN_UNITS See codegen-units . profile.<name>.debug Type: integer or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG See debug . profile.<name>.split-debuginfo Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_SPLIT_DEBUGINFO See split-debuginfo . profile.<name>.debug-assertions Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG_ASSERTIONS See debug-assertions . profile.<name>.incremental Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_INCREMENTAL See incremental . profile.<name>.lto Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_LTO See lto . profile.<name>.overflow-checks Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_OVERFLOW_CHECKS See overflow-checks . profile.<name>.opt-level Type: integer or string Default: See profile docs. Environment: CARGO_PROFILE_<name>_OPT_LEVEL See opt-level . profile.<name>.panic Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_PANIC See panic . profile.<name>.rpath Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_RPATH See rpath . profile.<name>.strip Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_STRIP See strip . [resolver] The [resolver] table overrides dependency resolution behavior for local development (e.g. excludes cargo install ). resolver.incompatible-rust-versions Type: string Default: See resolver docs Environment: CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS When resolving which version of a dependency to use, select how versions with incompatible package.rust-version s are treated. 
Values include: allow : treat rust-version -incompatible versions like any other version fallback : only consider rust-version -incompatible versions if no other version matched Can be overridden with --ignore-rust-version CLI option Setting the dependency’s version requirement higher than any version with a compatible rust-version Specifying the version to cargo update with --precise See the resolver chapter for more details. MSRV: allow is supported on any version fallback is respected as of 1.84 [registries] The [registries] table is used for specifying additional registries . It consists of a sub-table for each named registry. registries.<name>.index Type: string (url) Default: none Environment: CARGO_REGISTRIES_<name>_INDEX Specifies the URL of the index for the registry. registries.<name>.token Type: string Default: none Environment: CARGO_REGISTRIES_<name>_TOKEN Specifies the authentication token for the given registry. This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registries.<name>.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRIES_<name>_CREDENTIAL_PROVIDER Specifies the credential provider for the given registry. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registries.crates-io.protocol Type: string Default: "sparse" Environment: CARGO_REGISTRIES_CRATES_IO_PROTOCOL Specifies the protocol used to access crates.io. Allowed values are git or sparse . git causes Cargo to clone the entire index of all packages ever published to crates.io from https://github.com/rust-lang/crates.io-index/ . This can have performance implications due to the size of the index. sparse is a newer protocol which uses HTTPS to download only what is necessary from https://index.crates.io/ . This can result in a significant performance improvement for resolving new dependencies in most situations. More information about registry protocols may be found in the Registries chapter . [registry] The [registry] table controls the default registry used when one is not specified. registry.index This value is no longer accepted and should not be used. registry.default Type: string Default: "crates-io" Environment: CARGO_REGISTRY_DEFAULT The name of the registry (from the registries table ) to use by default for registry commands like cargo publish . Can be overridden with the --registry command-line option. registry.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRY_CREDENTIAL_PROVIDER Specifies the credential provider for crates.io . If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registry.token Type: string Default: none Environment: CARGO_REGISTRY_TOKEN Specifies the authentication token for crates.io . This value should only appear in the credentials file. 
This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registry.global-credential-providers Type: array Default: ["cargo:token"] Environment: CARGO_REGISTRY_GLOBAL_CREDENTIAL_PROVIDERS Specifies the list of global credential providers. If credential provider is not set for a specific registry using registries.<name>.credential-provider , Cargo will use the credential providers in this list. Providers toward the end of the list have precedence. Path and arguments are split on spaces. If the path or arguments contains spaces, the credential provider should be defined in the [credential-alias] table and referenced here by its alias. See Registry Authentication for more information. [source] The [source] table defines the registry sources available. See Source Replacement for more information. It consists of a sub-table for each named source. A source should only define one kind (directory, registry, local-registry, or git). source.<name>.replace-with Type: string Default: none Environment: not supported If set, replace this source with the given named source or named registry. source.<name>.directory Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a directory source. source.<name>.registry Type: string (url) Default: none Environment: not supported Sets the URL to use for a registry source. source.<name>.local-registry Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a local registry source. source.<name>.git Type: string (url) Default: none Environment: not supported Sets the URL to use for a git repository source. source.<name>.branch Type: string Default: none Environment: not supported Sets the branch name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.tag Type: string Default: none Environment: not supported Sets the tag name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.rev Type: string Default: none Environment: not supported Sets the revision to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. [target] The [target] table is used for specifying settings for specific platform targets. It consists of a sub-table which is either a platform triple or a cfg() expression . The given values will be used if the target platform matches either the <triple> value or the <cfg> expression. [target.thumbv7m-none-eabi] linker = "arm-none-eabi-gcc" runner = "my-emulator" rustflags = ["…", "…"] [target.'cfg(all(target_arch = "arm", target_os = "none"))'] runner = "my-arm-wrapper" rustflags = ["…", "…"] cfg values come from those built-in to the compiler (run rustc --print=cfg to view) and extra --cfg flags passed to rustc (such as those defined in RUSTFLAGS ). Do not try to match on debug_assertions , test , Cargo features like feature="foo" , or values set by build scripts . If using a target spec JSON file, the <triple> value is the filename stem. For example --target foo/bar.json would match [target.bar] . target.<triple>.ar This option is deprecated and unused. target.<triple>.linker Type: string (program path) Default: none Environment: CARGO_TARGET_<triple>_LINKER Specifies the linker which is passed to rustc (via -C linker ) when the <triple> is being compiled for. By default, the linker is not overridden. 
target.<cfg>.linker This is similar to the target linker , but using a cfg() expression . If both a <triple> and <cfg> linker match, the <triple> will take precedence. It is an error if more than one <cfg> linker matches the current target. target.<triple>.runner Type: string or array of strings ( program path with args ) Default: none Environment: CARGO_TARGET_<triple>_RUNNER If a runner is provided, executables for the target <triple> will be executed by invoking the specified runner with the actual executable passed as an argument. This applies to cargo run , cargo test and cargo bench commands. By default, compiled executables are executed directly. target.<cfg>.runner This is similar to the target runner , but using a cfg() expression . If both a <triple> and <cfg> runner match, the <triple> will take precedence. It is an error if more than one <cfg> runner matches the current target. target.<triple>.rustflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTFLAGS Passes a set of custom flags to the compiler for this <triple> . The value may be an array of strings or a space-separated string. See build.rustflags for more details on the different ways to specify extra flags. target.<cfg>.rustflags This is similar to the target rustflags , but using a cfg() expression . If several <cfg> and <triple> entries match the current target, the flags are joined together. target.<triple>.rustdocflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTDOCFLAGS Passes a set of custom flags to rustdoc for this <triple> . The value may be an array of strings or a space-separated string. See build.rustdocflags for more details on the different ways to specify extra flags. target.<triple>.<links> The links sub-table provides a way to override a build script . When specified, the build script for the given links library will not be run, and the given values will be used instead. [target.x86_64-unknown-linux-gnu.foo] rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] The [term] table controls terminal output and interaction. term.quiet Type: boolean Default: false Environment: CARGO_TERM_QUIET Controls whether or not log messages are displayed by Cargo. Specifying the --quiet flag will override and force quiet output. Specifying the --verbose flag will override and disable quiet output. term.verbose Type: boolean Default: false Environment: CARGO_TERM_VERBOSE Controls whether or not extra detailed messages are displayed by Cargo. Specifying the --quiet flag will override and disable verbose output. Specifying the --verbose flag will override and force verbose output. term.color Type: string Default: "auto" Environment: CARGO_TERM_COLOR Controls whether or not colored output is used in the terminal. Possible values: auto (default): Automatically detect if color support is available on the terminal. always : Always display colors. never : Never display colors. Can be overridden with the --color command-line option. term.hyperlinks Type: bool Default: auto-detect Environment: CARGO_TERM_HYPERLINKS Controls whether or not hyperlinks are used in the terminal. term.unicode Type: bool Default: auto-detect Environment: CARGO_TERM_UNICODE Controls whether output can be rendered using non-ASCII unicode characters.
term.progress.when Type: string Default: "auto" Environment: CARGO_TERM_PROGRESS_WHEN Controls whether or not progress bar is shown in the terminal. Possible values: auto (default): Intelligently guess whether to show progress bar. always : Always show progress bar. never : Never show progress bar. term.progress.width Type: integer Default: none Environment: CARGO_TERM_PROGRESS_WIDTH Sets the width for progress bar. term.progress.term-integration Type: bool Default: auto-detect Environment: CARGO_TERM_PROGRESS_TERM_INTEGRATION Report progress to the terminal emulator for display in places like the task bar. | 2026-01-13T09:29:13 |
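Pulling the [term] keys above together, a minimal illustrative .cargo/config.toml snippet (the values are examples, not recommendations) might be:

[term]
verbose = false          # keep output at the normal level of detail
color = "auto"           # detect whether the terminal supports color
progress.when = "auto"   # show the progress bar when it makes sense
progress.width = 80      # cap the progress bar at 80 columns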
https://doc.rust-lang.org/std/fmt/trait.UpperHex.html | UpperHex in std::fmt - Rust (std 1.92.0)

Trait std::fmt::UpperHex (stable since 1.0.0)

pub trait UpperHex {
    // Required method
    fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>;
}

X formatting. The UpperHex trait should format its output as a number in hexadecimal, with A through F in upper case. For primitive signed integers (i8 to i128, and isize), negative values are formatted as the two's complement representation. The alternate flag, #, adds a 0x in front of the output. For more information on formatters, see the module-level documentation.

Examples

Basic usage with i32:

let y = 42; // 42 is '2A' in hex

assert_eq!(format!("{y:X}"), "2A");
assert_eq!(format!("{y:#X}"), "0x2A");

assert_eq!(format!("{:X}", -16), "FFFFFFF0");

Implementing UpperHex on a type:

use std::fmt;

struct Length(i32);

impl fmt::UpperHex for Length {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let val = self.0;
        fmt::UpperHex::fmt(&val, f) // delegate to i32's implementation
    }
}

let l = Length(i32::MAX);
assert_eq!(format!("l as hex is: {l:X}"), "l as hex is: 7FFFFFFF");
assert_eq!(format!("l as hex is: {l:#010X}"), "l as hex is: 0x7FFFFFFF");

Required Methods

fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error> (stable since 1.0.0)

Formats the value using the given formatter.

Errors: This function should return Err if, and only if, the provided Formatter returns Err. String formatting is considered an infallible operation; this function only returns a Result because writing to the underlying stream might fail and it must provide a way to propagate the fact that an error has occurred back up the stack.

Implementors

impl UpperHex for i8, i16, i32, i64, i128, and isize (1.0.0)
impl UpperHex for u8, u16, u32, u64, u128, and usize (1.0.0)
impl<T> UpperHex for &T where T: UpperHex + ?Sized (1.0.0)
impl<T> UpperHex for &mut T where T: UpperHex + ?Sized (1.0.0)
impl<T> UpperHex for NonZero<T> where T: ZeroablePrimitive + UpperHex (1.28.0)
impl<T> UpperHex for Saturating<T> where T: UpperHex (1.74.0)
impl<T> UpperHex for Wrapping<T> where T: UpperHex (1.11.0) | 2026-01-13T09:29:13
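A small supplementary sketch (not from the page above) showing that the X format specifier drives UpperHex, that width and fill flags apply to primitive integers, and how a hypothetical wrapper type can hex-format a byte slice by delegating to u8's implementation:

use std::fmt;

// Hypothetical wrapper type, for illustration only.
struct Bytes<'a>(&'a [u8]);

impl fmt::UpperHex for Bytes<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for b in self.0 {
            // Each byte delegates to u8's UpperHex impl, zero-padded to two digits.
            write!(f, "{b:02X}")?;
        }
        Ok(())
    }
}

fn main() {
    assert_eq!(format!("{:X}", Bytes(&[0xDE, 0xAD, 0x01])), "DEAD01");
    // For primitive integers, width and fill flags apply to the whole value:
    assert_eq!(format!("{:08X}", 0xABu32), "000000AB");
}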
https://doc.rust-lang.org/reference/attributes.html#grammar-InnerAttribute | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicate that a type will have more fields/variants added in future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13 |
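As a brief illustrative sketch (not taken from the reference text above) combining several attribute forms it describes: an outer lint attribute, a cfg_attr that conditionally injects another attribute, an unsafe attribute wrapped in unsafe(..), and a tool attribute:

// Outer lint attribute altering the default lint level.
#[allow(dead_code)]
fn helper() {}

// `cfg_attr` conditionally includes another attribute when the predicate holds.
#[cfg_attr(target_os = "linux", allow(unused_variables))]
fn platform_specific(flag: bool) {}

// Unsafe attributes must be wrapped in `unsafe(..)`.
#[unsafe(no_mangle)]
pub extern "C" fn exported_symbol() {}

// Tool attribute: the first path segment names the tool (rustfmt here).
#[rustfmt::skip]
struct Matrix { a: i32,   b: i32 }

fn main() {}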
https://doc.rust-lang.org/cargo/reference/config.html#executable-paths-with-arguments | Configuration - The Cargo Book

Configuration

This document explains how Cargo's configuration system works, as well as available keys for configuration. For configuration of a package through its manifest, see the manifest format.

Hierarchical structure

Cargo allows local configuration for a particular package as well as global configuration. It looks for configuration files in the current directory and all parent directories. If, for example, Cargo were invoked in /projects/foo/bar/baz, then the following configuration files would be probed for and unified in this order:

/projects/foo/bar/baz/.cargo/config.toml
/projects/foo/bar/.cargo/config.toml
/projects/foo/.cargo/config.toml
/projects/.cargo/config.toml
/.cargo/config.toml
$CARGO_HOME/config.toml which defaults to:
  Windows: %USERPROFILE%\.cargo\config.toml
  Unix: $HOME/.cargo/config.toml

With this structure, you can specify configuration per-package, and even possibly check it into version control. You can also specify personal defaults with a configuration file in your home directory. If a key is specified in multiple config files, the values will get merged together. Numbers, strings, and booleans will use the value in the deeper config directory, taking precedence over ancestor directories, where the home directory is the lowest priority. Arrays will be joined together with higher precedence items being placed later in the merged array. At present, when being invoked from a workspace, Cargo does not read config files from crates within the workspace. I.e., if a workspace has two crates in it, named /projects/foo/bar/baz/mylib and /projects/foo/bar/baz/mybin, and there are Cargo configs at /projects/foo/bar/baz/mylib/.cargo/config.toml and /projects/foo/bar/baz/mybin/.cargo/config.toml, Cargo does not read those configuration files if it is invoked from the workspace root (/projects/foo/bar/baz/).

Note: Cargo also reads config files without the .toml extension, such as .cargo/config. Support for the .toml extension was added in version 1.39 and is the preferred form. If both files exist, Cargo will use the file without the extension.

Configuration format

Configuration files are written in the TOML format (like the manifest), with simple key-value pairs inside of sections (tables). The following is a quick overview of all settings, with detailed descriptions found below.
paths = ["/path/to/override"] # path dependency overrides [alias] # command aliases b = "build" c = "check" t = "test" r = "run" rr = "run --release" recursive_example = "rr --example recursions" space_example = ["run", "--release", "--", "\"command list\""] [build] jobs = 1 # number of parallel jobs, defaults to # of CPUs rustc = "rustc" # the rust compiler tool rustc-wrapper = "…" # run this wrapper instead of `rustc` rustc-workspace-wrapper = "…" # run this wrapper instead of `rustc` for workspace members rustdoc = "rustdoc" # the doc generator tool target = "triple" # build for the target triple (ignored by `cargo install`) target-dir = "target" # path of where to place generated artifacts build-dir = "target" # path of where to place intermediate build artifacts rustflags = ["…", "…"] # custom flags to pass to all compiler invocations rustdocflags = ["…", "…"] # custom flags to pass to rustdoc incremental = true # whether or not to enable incremental compilation dep-info-basedir = "…" # path for the base directory for targets in depfiles [credential-alias] # Provides a way to define aliases for credential providers. my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] [doc] browser = "chromium" # browser to use with `cargo doc --open`, # overrides the `BROWSER` environment variable [env] # Set ENV_VAR_NAME=value for any process run by Cargo ENV_VAR_NAME = "value" # Set even if already present in environment ENV_VAR_NAME_2 = { value = "value", force = true } # `value` is relative to the parent of `.cargo/config.toml`, env var will be the full absolute path ENV_VAR_NAME_3 = { value = "relative/path", relative = true } [future-incompat-report] frequency = 'always' # when to display a notification about a future incompat report [cache] auto-clean-frequency = "1 day" # How often to perform automatic cache cleaning [cargo-new] vcs = "none" # VCS to use ('git', 'hg', 'pijul', 'fossil', 'none') [http] debug = false # HTTP debugging proxy = "host:port" # HTTP proxy in libcurl format ssl-version = "tlsv1.3" # TLS version to use ssl-version.max = "tlsv1.3" # maximum TLS version ssl-version.min = "tlsv1.1" # minimum TLS version timeout = 30 # timeout for each HTTP request, in seconds low-speed-limit = 10 # network timeout threshold (bytes/sec) cainfo = "cert.pem" # path to Certificate Authority (CA) bundle proxy-cainfo = "cert.pem" # path to proxy Certificate Authority (CA) bundle check-revoke = true # check for SSL certificate revocation multiplexing = true # HTTP/2 multiplexing user-agent = "…" # the user-agent header [install] root = "/some/path" # `cargo install` destination directory [net] retry = 3 # network retries git-fetch-with-cli = true # use the `git` executable for git operations offline = true # do not access the network [net.ssh] known-hosts = ["..."] # known SSH host keys [patch.<registry>] # Same keys as for [patch] in Cargo.toml [profile.<name>] # Modify profile settings via config. inherits = "dev" # Inherits settings from [profile.dev]. opt-level = 0 # Optimization level. debug = true # Include debug info. split-debuginfo = '...' # Debug info splitting behavior. strip = "none" # Removes symbols or debuginfo. debug-assertions = true # Enables debug assertions. overflow-checks = true # Enables runtime integer overflow checks. lto = false # Sets link-time optimization. panic = 'unwind' # The panic strategy. incremental = true # Incremental compilation. codegen-units = 16 # Number of code generation units. rpath = false # Sets the rpath linking option. 
[profile.<name>.build-override] # Overrides build-script settings. # Same keys for a normal profile. [profile.<name>.package.<name>] # Override profile for a package. # Same keys for a normal profile (minus `panic`, `lto`, and `rpath`). [resolver] incompatible-rust-versions = "allow" # Specifies how resolver reacts to these [registries.<name>] # registries other than crates.io index = "…" # URL of the registry index token = "…" # authentication token for the registry credential-provider = "cargo:token" # The credential provider for this registry. [registries.crates-io] protocol = "sparse" # The protocol to use to access crates.io. [registry] default = "…" # name of the default registry token = "…" # authentication token for crates.io credential-provider = "cargo:token" # The credential provider for crates.io. global-credential-providers = ["cargo:token"] # The credential providers to use by default. [source.<name>] # source definition and replacement replace-with = "…" # replace this source with the given named source directory = "…" # path to a directory source registry = "…" # URL to a registry source local-registry = "…" # path to a local registry source git = "…" # URL of a git repository source branch = "…" # branch name for the git repository tag = "…" # tag name for the git repository rev = "…" # revision for the git repository [target.<triple>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` rustdocflags = ["…", "…"] # custom flags for `rustdoc` [target.<cfg>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` [target.<triple>.<links>] # `links` build script override rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] quiet = false # whether cargo output is quiet verbose = false # whether cargo provides verbose output color = 'auto' # whether cargo colorizes output hyperlinks = true # whether cargo inserts links into output unicode = true # whether cargo can render output using non-ASCII unicode characters progress.when = 'auto' # whether cargo shows progress bar progress.width = 80 # width of progress bar progress.term-integration = true # whether cargo reports progress to terminal emulator Environment variables Cargo can also be configured through environment variables in addition to the TOML configuration files. For each configuration key of the form foo.bar the environment variable CARGO_FOO_BAR can also be used to define the value. Keys are converted to uppercase, dots and dashes are converted to underscores. For example the target.x86_64-unknown-linux-gnu.runner key can also be defined by the CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER environment variable. Environment variables will take precedence over TOML configuration files. Currently only integer, boolean, string and some array values are supported to be defined by environment variables. Descriptions below indicate which keys support environment variables and otherwise they are not supported due to technical issues . In addition to the system above, Cargo recognizes a few other specific environment variables . Command-line overrides Cargo also accepts arbitrary configuration overrides through the --config command-line option. 
The argument should be in TOML syntax of KEY=VALUE or provided as a path to an extra configuration file: # With `KEY=VALUE` in TOML syntax cargo --config net.git-fetch-with-cli=true fetch # With a path to a configuration file cargo --config ./path/to/my/extra-config.toml fetch The --config option may be specified multiple times, in which case the values are merged in left-to-right order, using the same merging logic that is used when multiple configuration files apply. Configuration values specified this way take precedence over environment variables, which take precedence over configuration files. When the --config option is provided as an extra configuration file, The configuration file loaded this way follow the same precedence rules as other options specified directly with --config . Some examples of what it looks like using Bourne shell syntax: # Most shells will require escaping. cargo --config http.proxy=\"http://example.com\" … # Spaces may be used. cargo --config "net.git-fetch-with-cli = true" … # TOML array example. Single quotes make it easier to read and write. cargo --config 'build.rustdocflags = ["--html-in-header", "header.html"]' … # Example of a complex TOML key. cargo --config "target.'cfg(all(target_arch = \"arm\", target_os = \"none\"))'.runner = 'my-runner'" … # Example of overriding a profile setting. cargo --config profile.dev.package.image.opt-level=3 … Config-relative paths Paths in config files may be absolute, relative, or a bare name without any path separators. Paths for executables without a path separator will use the PATH environment variable to search for the executable. Paths for non-executables will be relative to where the config value is defined. In particular, rules are: For environment variables, paths are relative to the current working directory. For config values loaded directly from the --config KEY=VALUE option, paths are relative to the current working directory. For config files, paths are relative to the parent directory of the directory where the config files were defined, no matter those files are from either the hierarchical probing or the --config <path> option. Note: To maintain consistency with existing .cargo/config.toml probing behavior, it is by design that a path in a config file passed via --config <path> is also relative to two levels up from the config file itself. To avoid unexpected results, the rule of thumb is putting your extra config files at the same level of discovered .cargo/config.toml in your project. For instance, given a project /my/project , it is recommended to put config files under /my/project/.cargo or a new directory at the same level, such as /my/project/.config . # Relative path examples. [target.x86_64-unknown-linux-gnu] runner = "foo" # Searches `PATH` for `foo`. [source.vendored-sources] # Directory is relative to the parent where `.cargo/config.toml` is located. # For example, `/my/project/.cargo/config.toml` would result in `/my/project/vendor`. directory = "vendor" Executable paths with arguments Some Cargo commands invoke external programs, which can be configured as a path and some number of arguments. The value may be an array of strings like ['/path/to/program', 'somearg'] or a space-separated string like '/path/to/program somearg' . If the path to the executable contains a space, the list form must be used. If Cargo is passing other arguments to the program such as a path to open or run, they will be passed after the last specified argument in the value of an option of this format. 
If the specified program does not have path separators, Cargo will search PATH for its executable. Credentials Configuration values with sensitive information are stored in the $CARGO_HOME/credentials.toml file. This file is automatically created and updated by cargo login and cargo logout when using the cargo:token credential provider. Tokens are used by some Cargo commands such as cargo publish for authenticating with remote registries. Care should be taken to protect the tokens and to keep them secret. It follows the same format as Cargo config files. [registry] token = "…" # Access token for crates.io [registries.<name>] token = "…" # Access token for the named registry As with most other config values, tokens may be specified with environment variables. The token for crates.io may be specified with the CARGO_REGISTRY_TOKEN environment variable. Tokens for other registries may be specified with environment variables of the form CARGO_REGISTRIES_<name>_TOKEN where <name> is the name of the registry in all capital letters. Note: Cargo also reads and writes credential files without the .toml extension, such as .cargo/credentials . Support for the .toml extension was added in version 1.39. In version 1.68, Cargo writes to the file with the extension by default. However, for backward compatibility reason, when both files exist, Cargo will read and write the file without the extension. Configuration keys This section documents all configuration keys. The description for keys with variable parts are annotated with angled brackets like target.<triple> where the <triple> part can be any target triple like target.x86_64-pc-windows-msvc . paths Type: array of strings (paths) Default: none Environment: not supported An array of paths to local packages which are to be used as overrides for dependencies. For more information see the Overriding Dependencies guide . [alias] Type: string or array of strings Default: see below Environment: CARGO_ALIAS_<name> The [alias] table defines CLI command aliases. For example, running cargo b is an alias for running cargo build . Each key in the table is the subcommand, and the value is the actual command to run. The value may be an array of strings, where the first element is the command and the following are arguments. It may also be a string, which will be split on spaces into subcommand and arguments. The following aliases are built-in to Cargo: [alias] b = "build" c = "check" d = "doc" t = "test" r = "run" rm = "remove" Aliases are not allowed to redefine existing built-in commands. Aliases are recursive: [alias] rr = "run --release" recursive_example = "rr --example recursions" [build] The [build] table controls build-time operations and compiler settings. build.jobs Type: integer or string Default: number of logical CPUs Environment: CARGO_BUILD_JOBS Sets the maximum number of compiler processes to run in parallel. If negative, it sets the maximum number of compiler processes to the number of logical CPUs plus provided value. Should not be 0. If a string default is provided, it sets the value back to defaults. Can be overridden with the --jobs CLI option. build.rustc Type: string (program path) Default: "rustc" Environment: CARGO_BUILD_RUSTC or RUSTC Sets the executable to use for rustc . build.rustc-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WRAPPER or RUSTC_WRAPPER Sets a wrapper to execute instead of rustc . 
The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). build.rustc-workspace-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WORKSPACE_WRAPPER or RUSTC_WORKSPACE_WRAPPER Sets a wrapper to execute instead of rustc , for workspace members only. When building a single-package project without workspaces, that package is considered to be the workspace. The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). It affects the filename hash so that artifacts produced by the wrapper are cached separately. If both rustc-wrapper and rustc-workspace-wrapper are set, then they will be nested: the final invocation is $RUSTC_WRAPPER $RUSTC_WORKSPACE_WRAPPER $RUSTC . build.rustdoc Type: string (program path) Default: "rustdoc" Environment: CARGO_BUILD_RUSTDOC or RUSTDOC Sets the executable to use for rustdoc . build.target Type: string or array of strings Default: host platform Environment: CARGO_BUILD_TARGET The default target platform triples to compile to. Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. Can be overridden with the --target CLI option. [build] target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"] build.target-dir Type: string (path) Default: "target" Environment: CARGO_BUILD_TARGET_DIR or CARGO_TARGET_DIR The path to where all compiler output is placed. The default if not specified is a directory named target located at the root of the workspace. Can be overridden with the --target-dir CLI option. For more information see the build cache documentation . build.build-dir Type: string (path) Default: Defaults to the value of build.target-dir Environment: CARGO_BUILD_BUILD_DIR The directory where intermediate build artifacts will be stored. Intermediate artifacts are produced by Rustc/Cargo during the build process. This option supports path templating. Available template variables: {workspace-root} resolves to root of the current workspace. {cargo-cache-home} resolves to CARGO_HOME {workspace-path-hash} resolves to a hash of the manifest path For more information see the build cache documentation . build.rustflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTFLAGS or CARGO_ENCODED_RUSTFLAGS or RUSTFLAGS Extra command-line flags to pass to rustc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTFLAGS environment variable. RUSTFLAGS environment variable. All matching target.<triple>.rustflags and target.<cfg>.rustflags config entries joined together. build.rustflags config value. Additional flags may also be passed with the cargo rustc command. If the --target flag (or build.target ) is used, then the flags will only be passed to the compiler for the target. Things being built for the host, such as build scripts or proc macros, will not receive the args. 
Without --target , the flags will be passed to all compiler invocations (including build scripts and proc macros) because dependencies are shared. If you have args that you do not want to pass to build scripts or proc macros and are building for the host, pass --target with the host triple . It is not recommended to pass in flags that Cargo itself usually manages. For example, the flags driven by profiles are best handled by setting the appropriate profile setting. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.rustdocflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTDOCFLAGS or CARGO_ENCODED_RUSTDOCFLAGS or RUSTDOCFLAGS Extra command-line flags to pass to rustdoc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTDOCFLAGS environment variable. RUSTDOCFLAGS environment variable. All matching target.<triple>.rustdocflags config entries joined together. build.rustdocflags config value. Additional flags may also be passed with the cargo rustdoc command. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.incremental Type: bool Default: from profile Environment: CARGO_BUILD_INCREMENTAL or CARGO_INCREMENTAL Whether or not to perform incremental compilation . The default if not set is to use the value from the profile . Otherwise this overrides the setting of all profiles. The CARGO_INCREMENTAL environment variable can be set to 1 to force enable incremental compilation for all profiles, or 0 to disable it. This env var overrides the config setting. build.dep-info-basedir Type: string (path) Default: none Environment: CARGO_BUILD_DEP_INFO_BASEDIR Strips the given path prefix from dep info file paths. This config setting is intended to convert absolute paths to relative paths for tools that require relative paths. The setting itself is a config-relative path. So, for example, a value of "." would strip all paths starting with the parent directory of the .cargo directory. build.pipelining This option is deprecated and unused. Cargo always has pipelining enabled. [credential-alias] Type: string or array of strings Default: empty Environment: CARGO_CREDENTIAL_ALIAS_<name> The [credential-alias] table defines credential provider aliases. These aliases can be referenced as an element of the registry.global-credential-providers array, or as a credential provider for a specific registry under registries.<NAME>.credential-provider . If specified as a string, the value will be split on spaces into path and arguments. For example, to define an alias called my-alias : [credential-alias] my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] See Registry Authentication for more information. [doc] The [doc] table defines options for the cargo doc command. 
doc.browser Type: string or array of strings (program path with args) Default: BROWSER environment variable, or, if that is missing, opening the link in a system-specific way This option sets the browser to be used by cargo doc, overriding the BROWSER environment variable when opening documentation with the --open option. [cargo-new] The [cargo-new] table defines defaults for the cargo new command. cargo-new.name This option is deprecated and unused. cargo-new.email This option is deprecated and unused. cargo-new.vcs Type: string Default: "git" or "none" Environment: CARGO_CARGO_NEW_VCS Specifies the source control system to use for initializing a new repository. Valid values are git, hg (for Mercurial), pijul, fossil, or none to disable this behavior. Defaults to git, or none if already inside a VCS repository. Can be overridden with the --vcs CLI option. [env] The [env] section allows you to set additional environment variables for build scripts, rustc invocations, cargo run and cargo build. [env] OPENSSL_DIR = "/opt/openssl" By default, the variables specified will not override values that already exist in the environment. This behavior can be changed by setting the force flag. Setting the relative flag evaluates the value as a config-relative path that is relative to the parent directory of the .cargo directory that contains the config.toml file. The value of the environment variable will be the full absolute path. [env] TMPDIR = { value = "/home/tmp", force = true } OPENSSL_DIR = { value = "vendor/openssl", relative = true } [future-incompat-report] The [future-incompat-report] table controls settings for future incompat reporting. future-incompat-report.frequency Type: string Default: "always" Environment: CARGO_FUTURE_INCOMPAT_REPORT_FREQUENCY Controls how often a notification is displayed in the terminal when a future incompat report is available. Possible values: always (default): Always display a notification when a command (e.g. cargo build) produces a future incompat report never: Never display a notification [cache] The [cache] table defines settings for cargo’s caches. Global caches When running cargo commands, Cargo will automatically track which files you are using within the global cache. Periodically, Cargo will delete files that have not been used for some period of time. It will delete files that have to be downloaded from the network if they have not been used in 3 months. Files that can be generated without network access will be deleted if they have not been used in 1 month. The automatic deletion of files only occurs when running commands that are already doing a significant amount of work, such as all of the build commands (cargo build, cargo test, cargo check, etc.), and cargo fetch. Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time. Note: This tracking is currently only implemented for the global cache in Cargo’s home directory. This includes registry indexes and source files downloaded from registries and git dependencies. Support for tracking build artifacts is not yet implemented, and tracked in cargo#13136. Additionally, there is an unstable feature to support manually triggering cache cleaning, and to further customize the configuration options. See the Unstable chapter for more information.
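As a quick illustration (a sketch under an assumed preference; the key is documented next), automatic cache cleaning could be turned off entirely:
[cache]
auto-clean-frequency = "never"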
cache.auto-clean-frequency Type: string Default: "1 day" Environment: CARGO_CACHE_AUTO_CLEAN_FREQUENCY This option defines how often Cargo will automatically delete unused files in the global cache. This does not define how old the files must be, those thresholds are described above . It supports the following settings: "never" — Never deletes old files. "always" — Checks to delete old files every time Cargo runs. An integer followed by “seconds”, “minutes”, “hours”, “days”, “weeks”, or “months” — Checks to delete old files at most the given time frame. [http] The [http] table defines settings for HTTP behavior. This includes fetching crate dependencies and accessing remote git repositories. http.debug Type: boolean Default: false Environment: CARGO_HTTP_DEBUG If true , enables debugging of HTTP requests. The debug information can be seen by setting the CARGO_LOG=network=debug environment variable (or use network=trace for even more information). Be wary when posting logs from this output in a public location. The output may include headers with authentication tokens which you don’t want to leak! Be sure to review logs before posting them. http.proxy Type: string Default: none Environment: CARGO_HTTP_PROXY or HTTPS_PROXY or https_proxy or http_proxy Sets an HTTP and HTTPS proxy to use. The format is in libcurl format as in [protocol://]host[:port] . If not set, Cargo will also check the http.proxy setting in your global git configuration. If none of those are set, the HTTPS_PROXY or https_proxy environment variables set the proxy for HTTPS requests, and http_proxy sets it for HTTP requests. http.timeout Type: integer Default: 30 Environment: CARGO_HTTP_TIMEOUT or HTTP_TIMEOUT Sets the timeout for each HTTP request, in seconds. http.cainfo Type: string (path) Default: none Environment: CARGO_HTTP_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify TLS certificates. If not specified, Cargo attempts to use the system certificates. http.proxy-cainfo Type: string (path) Default: falls back to http.cainfo if not set Environment: CARGO_HTTP_PROXY_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify proxy TLS certificates. http.check-revoke Type: boolean Default: true (Windows) false (all others) Environment: CARGO_HTTP_CHECK_REVOKE This determines whether or not TLS certificate revocation checks should be performed. This only works on Windows. http.ssl-version Type: string or min/max table Default: none Environment: CARGO_HTTP_SSL_VERSION This sets the minimum TLS version to use. It takes a string, with one of the possible values of "default" , "tlsv1" , "tlsv1.0" , "tlsv1.1" , "tlsv1.2" , or "tlsv1.3" . This may alternatively take a table with two keys, min and max , which each take a string value of the same kind that specifies the minimum and maximum range of TLS versions to use. The default is a minimum version of "tlsv1.0" and a max of the newest version supported on your platform, typically "tlsv1.3" . http.low-speed-limit Type: integer Default: 10 Environment: CARGO_HTTP_LOW_SPEED_LIMIT This setting controls timeout behavior for slow connections. If the average transfer speed in bytes per second is below the given value for http.timeout seconds (default 30 seconds), then the connection is considered too slow and Cargo will abort and retry. http.multiplexing Type: boolean Default: true Environment: CARGO_HTTP_MULTIPLEXING When true , Cargo will attempt to use the HTTP2 protocol with multiplexing. 
This allows multiple requests to use the same connection, usually improving performance when fetching multiple files. If false , Cargo will use HTTP 1.1 without pipelining. http.user-agent Type: string Default: Cargo’s version Environment: CARGO_HTTP_USER_AGENT Specifies a custom user-agent header to use. The default if not specified is a string that includes Cargo’s version. [install] The [install] table defines defaults for the cargo install command. install.root Type: string (path) Default: Cargo’s home directory Environment: CARGO_INSTALL_ROOT Sets the path to the root directory for installing executables for cargo install . Executables go into a bin directory underneath the root. To track information of installed executables, some extra files, such as .crates.toml and .crates2.json , are also created under this root. The default if not specified is Cargo’s home directory (default .cargo in your home directory). Can be overridden with the --root command-line option. [net] The [net] table controls networking configuration. net.retry Type: integer Default: 3 Environment: CARGO_NET_RETRY Number of times to retry possibly spurious network errors. net.git-fetch-with-cli Type: boolean Default: false Environment: CARGO_NET_GIT_FETCH_WITH_CLI If this is true , then Cargo will use the git executable to fetch registry indexes and git dependencies. If false , then it uses a built-in git library. Setting this to true can be helpful if you have special authentication requirements that Cargo does not support. See Git Authentication for more information about setting up git authentication. net.offline Type: boolean Default: false Environment: CARGO_NET_OFFLINE If this is true , then Cargo will avoid accessing the network, and attempt to proceed with locally cached data. If false , Cargo will access the network as needed, and generate an error if it encounters a network error. Can be overridden with the --offline command-line option. net.ssh The [net.ssh] table contains settings for SSH connections. net.ssh.known-hosts Type: array of strings Default: see description Environment: not supported The known-hosts array contains a list of SSH host keys that should be accepted as valid when connecting to an SSH server (such as for SSH git dependencies). Each entry should be a string in a format similar to OpenSSH known_hosts files. Each string should start with one or more hostnames separated by commas, a space, the key type name, a space, and the base64-encoded key. For example: [net.ssh] known-hosts = [ "example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFO4Q5T0UV0SQevair9PFwoxY9dl4pQl3u5phoqJH3cF" ] Cargo will attempt to load known hosts keys from common locations supported in OpenSSH, and will join those with any listed in a Cargo configuration file. If any matching entry has the correct key, the connection will be allowed. Cargo comes with the host keys for github.com built-in. If those ever change, you can add the new keys to the config or known_hosts file. See Git Authentication for more details. [patch] Just as you can override dependencies using [patch] in Cargo.toml , you can override them in the cargo configuration file to apply those patches to any affected build. The format is identical to the one used in Cargo.toml . Since .cargo/config.toml files are not usually checked into source control, you should prefer patching using Cargo.toml where possible to ensure that other developers can compile your crate in their own environments. 
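For reference, a config-file patch uses the same shape as in Cargo.toml (a hedged sketch; the crate name and local path are assumptions):
[patch.crates-io]
foo = { path = "/path/to/local/foo" }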
Patching through cargo configuration files is generally only appropriate when the patch section is automatically generated by an external build tool. If a given dependency is patched both in a cargo configuration file and a Cargo.toml file, the patch in the configuration file is used. If multiple configuration files patch the same dependency, standard cargo configuration merging is used, which prefers the value defined closest to the current directory, with $HOME/.cargo/config.toml taking the lowest precedence. Relative path dependencies in such a [patch] section are resolved relative to the configuration file they appear in. [profile] The [profile] table can be used to globally change profile settings, and override settings specified in Cargo.toml . It has the same syntax and options as profiles specified in Cargo.toml . See the Profiles chapter for details about the options. [profile.<name>.build-override] Environment: CARGO_PROFILE_<name>_BUILD_OVERRIDE_<key> The build-override table overrides settings for build scripts, proc macros, and their dependencies. It has the same keys as a normal profile. See the overrides section for more details. [profile.<name>.package.<name>] Environment: not supported The package table overrides settings for specific packages. It has the same keys as a normal profile, minus the panic , lto , and rpath settings. See the overrides section for more details. profile.<name>.codegen-units Type: integer Default: See profile docs. Environment: CARGO_PROFILE_<name>_CODEGEN_UNITS See codegen-units . profile.<name>.debug Type: integer or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG See debug . profile.<name>.split-debuginfo Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_SPLIT_DEBUGINFO See split-debuginfo . profile.<name>.debug-assertions Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG_ASSERTIONS See debug-assertions . profile.<name>.incremental Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_INCREMENTAL See incremental . profile.<name>.lto Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_LTO See lto . profile.<name>.overflow-checks Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_OVERFLOW_CHECKS See overflow-checks . profile.<name>.opt-level Type: integer or string Default: See profile docs. Environment: CARGO_PROFILE_<name>_OPT_LEVEL See opt-level . profile.<name>.panic Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_PANIC See panic . profile.<name>.rpath Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_RPATH See rpath . profile.<name>.strip Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_STRIP See strip . [resolver] The [resolver] table overrides dependency resolution behavior for local development (e.g. excludes cargo install ). resolver.incompatible-rust-versions Type: string Default: See resolver docs Environment: CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS When resolving which version of a dependency to use, select how versions with incompatible package.rust-version s are treated. 
Values include: allow (treat rust-version-incompatible versions like any other version) and fallback (only consider rust-version-incompatible versions if no other version matched). This setting can be overridden with the --ignore-rust-version CLI option, by setting the dependency’s version requirement higher than any version with a compatible rust-version, or by specifying the version to cargo update with --precise. See the resolver chapter for more details. MSRV: allow is supported on any version; fallback is respected as of 1.84. [registries] The [registries] table is used for specifying additional registries. It consists of a sub-table for each named registry. registries.<name>.index Type: string (url) Default: none Environment: CARGO_REGISTRIES_<name>_INDEX Specifies the URL of the index for the registry. registries.<name>.token Type: string Default: none Environment: CARGO_REGISTRIES_<name>_TOKEN Specifies the authentication token for the given registry. This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registries.<name>.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRIES_<name>_CREDENTIAL_PROVIDER Specifies the credential provider for the given registry. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registries.crates-io.protocol Type: string Default: "sparse" Environment: CARGO_REGISTRIES_CRATES_IO_PROTOCOL Specifies the protocol used to access crates.io. Allowed values are git or sparse. git causes Cargo to clone the entire index of all packages ever published to crates.io from https://github.com/rust-lang/crates.io-index/. This can have performance implications due to the size of the index. sparse is a newer protocol which uses HTTPS to download only what is necessary from https://index.crates.io/. This can result in a significant performance improvement for resolving new dependencies in most situations. More information about registry protocols may be found in the Registries chapter. [registry] The [registry] table controls the default registry used when one is not specified. registry.index This value is no longer accepted and should not be used. registry.default Type: string Default: "crates-io" Environment: CARGO_REGISTRY_DEFAULT The name of the registry (from the registries table) to use by default for registry commands like cargo publish. Can be overridden with the --registry command-line option. registry.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRY_CREDENTIAL_PROVIDER Specifies the credential provider for crates.io. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registry.token Type: string Default: none Environment: CARGO_REGISTRY_TOKEN Specifies the authentication token for crates.io. This value should only appear in the credentials file.
This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registry.global-credential-providers Type: array Default: ["cargo:token"] Environment: CARGO_REGISTRY_GLOBAL_CREDENTIAL_PROVIDERS Specifies the list of global credential providers. If credential provider is not set for a specific registry using registries.<name>.credential-provider , Cargo will use the credential providers in this list. Providers toward the end of the list have precedence. Path and arguments are split on spaces. If the path or arguments contains spaces, the credential provider should be defined in the [credential-alias] table and referenced here by its alias. See Registry Authentication for more information. [source] The [source] table defines the registry sources available. See Source Replacement for more information. It consists of a sub-table for each named source. A source should only define one kind (directory, registry, local-registry, or git). source.<name>.replace-with Type: string Default: none Environment: not supported If set, replace this source with the given named source or named registry. source.<name>.directory Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a directory source. source.<name>.registry Type: string (url) Default: none Environment: not supported Sets the URL to use for a registry source. source.<name>.local-registry Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a local registry source. source.<name>.git Type: string (url) Default: none Environment: not supported Sets the URL to use for a git repository source. source.<name>.branch Type: string Default: none Environment: not supported Sets the branch name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.tag Type: string Default: none Environment: not supported Sets the tag name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.rev Type: string Default: none Environment: not supported Sets the revision to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. [target] The [target] table is used for specifying settings for specific platform targets. It consists of a sub-table which is either a platform triple or a cfg() expression . The given values will be used if the target platform matches either the <triple> value or the <cfg> expression. [target.thumbv7m-none-eabi] linker = "arm-none-eabi-gcc" runner = "my-emulator" rustflags = ["…", "…"] [target.'cfg(all(target_arch = "arm", target_os = "none"))'] runner = "my-arm-wrapper" rustflags = ["…", "…"] cfg values come from those built-in to the compiler (run rustc --print=cfg to view) and extra --cfg flags passed to rustc (such as those defined in RUSTFLAGS ). Do not try to match on debug_assertions , test , Cargo features like feature="foo" , or values set by build scripts . If using a target spec JSON file, the <triple> value is the filename stem. For example --target foo/bar.json would match [target.bar] . target.<triple>.ar This option is deprecated and unused. target.<triple>.linker Type: string (program path) Default: none Environment: CARGO_TARGET_<triple>_LINKER Specifies the linker which is passed to rustc (via -C linker ) when the <triple> is being compiled for. By default, the linker is not overridden. 
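For example (a sketch; the triple and the cross-toolchain linker name are assumptions that depend on the toolchain installed on your system):
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"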
target.<cfg>.linker This is similar to the target linker, but using a cfg() expression. If both a <triple> and <cfg> linker match, the <triple> will take precedence. It is an error if more than one <cfg> linker matches the current target. target.<triple>.runner Type: string or array of strings (program path with args) Default: none Environment: CARGO_TARGET_<triple>_RUNNER If a runner is provided, executables for the target <triple> will be executed by invoking the specified runner with the actual executable passed as an argument. This applies to cargo run, cargo test and cargo bench commands. By default, compiled executables are executed directly. target.<cfg>.runner This is similar to the target runner, but using a cfg() expression. If both a <triple> and <cfg> runner match, the <triple> will take precedence. It is an error if more than one <cfg> runner matches the current target. target.<triple>.rustflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTFLAGS Passes a set of custom flags to the compiler for this <triple>. The value may be an array of strings or a space-separated string. See build.rustflags for more details on the different ways to specify extra flags. target.<cfg>.rustflags This is similar to the target rustflags, but using a cfg() expression. If several <cfg> and <triple> entries match the current target, the flags are joined together. target.<triple>.rustdocflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTDOCFLAGS Passes a set of custom flags to rustdoc for this <triple>. The value may be an array of strings or a space-separated string. See build.rustdocflags for more details on the different ways to specify extra flags. target.<triple>.<links> The links sub-table provides a way to override a build script. When specified, the build script for the given links library will not be run, and the given values will be used instead. [target.x86_64-unknown-linux-gnu.foo] rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] The [term] table controls terminal output and interaction. term.quiet Type: boolean Default: false Environment: CARGO_TERM_QUIET Controls whether or not log messages are displayed by Cargo. Specifying the --quiet flag will override and force quiet output. Specifying the --verbose flag will override and disable quiet output. term.verbose Type: boolean Default: false Environment: CARGO_TERM_VERBOSE Controls whether or not extra detailed messages are displayed by Cargo. Specifying the --quiet flag will override and disable verbose output. Specifying the --verbose flag will override and force verbose output. term.color Type: string Default: "auto" Environment: CARGO_TERM_COLOR Controls whether or not colored output is used in the terminal. Possible values: auto (default): Automatically detect if color support is available on the terminal. always: Always display colors. never: Never display colors. Can be overridden with the --color command-line option. term.hyperlinks Type: bool Default: auto-detect Environment: CARGO_TERM_HYPERLINKS Controls whether or not hyperlinks are used in the terminal. term.unicode Type: bool Default: auto-detect Environment: CARGO_TERM_UNICODE Controls whether output can be rendered using non-ASCII unicode characters.
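Putting a few of the [term] settings above together (a hedged sketch; the values are assumptions, not recommendations):
[term]
verbose = false
color = "always"
unicode = true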
term.progress.when Type: string Default: "auto" Environment: CARGO_TERM_PROGRESS_WHEN Controls whether or not the progress bar is shown in the terminal. Possible values: auto (default): Intelligently guess whether to show the progress bar. always: Always show the progress bar. never: Never show the progress bar. term.progress.width Type: integer Default: none Environment: CARGO_TERM_PROGRESS_WIDTH Sets the width of the progress bar. term.progress.term-integration Type: bool Default: auto-detect Environment: CARGO_TERM_PROGRESS_TERM_INTEGRATION Reports progress to the terminal emulator for display in places like the task bar. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/cargo/reference/features.html#the-features-section | Features - The Cargo Book Features Cargo “features” provide a mechanism to express conditional compilation and optional dependencies . A package defines a set of named features in the [features] table of Cargo.toml , and each feature can either be enabled or disabled. Features for the package being built can be enabled on the command-line with flags such as --features . Features for dependencies can be enabled in the dependency declaration in Cargo.toml . Note : New crates or versions published on crates.io are now limited to a maximum of 300 features. Exceptions are granted on a case-by-case basis. See this blog post for details. Participation in solution discussions is encouraged via the crates.io Zulip stream. See also the Features Examples chapter for some examples of how features can be used. The [features] section Features are defined in the [features] table in Cargo.toml . Each feature specifies an array of other features or optional dependencies that it enables. The following examples illustrate how features could be used for a 2D image processing library where support for different image formats can be optionally included: [features] # Defines a feature named `webp` that does not enable any other features. webp = [] With this feature defined, cfg expressions can be used to conditionally include code to support the requested feature at compile time. For example, inside lib.rs of the package could include this: #![allow(unused)] fn main() { // This conditionally includes a module which implements WEBP support. #[cfg(feature = "webp")] pub mod webp; } Cargo sets features in the package using the rustc --cfg flag , and code can test for their presence with the cfg attribute or the cfg macro . Features can list other features to enable. For example, the ICO image format can contain BMP and PNG images, so when it is enabled, it should make sure those other features are enabled, too: [features] bmp = [] png = [] ico = ["bmp", "png"] webp = [] Feature names may include characters from the Unicode XID standard (which includes most letters), and additionally allows starting with _ or digits 0 through 9 , and after the first character may also contain - , + , or . . Note : crates.io imposes additional constraints on feature name syntax that they must only be ASCII alphanumeric characters or _ , - , or + . The default feature By default, all features are disabled unless explicitly enabled. This can be changed by specifying the default feature: [features] default = ["ico", "webp"] bmp = [] png = [] ico = ["bmp", "png"] webp = [] When the package is built, the default feature is enabled which in turn enables the listed features. This behavior can be changed by: The --no-default-features command-line flag disables the default features of the package. The default-features = false option can be specified in a dependency declaration . Note : Be careful about choosing the default feature set. The default features are a convenience that make it easier to use a package without forcing the user to carefully select which features to enable for common use, but there are some drawbacks. Dependencies automatically enable default features unless default-features = false is specified.
This can make it difficult to ensure that the default features are not enabled, especially for a dependency that appears multiple times in the dependency graph. Every package must ensure that default-features = false is specified to avoid enabling them. Another issue is that it can be a SemVer incompatible change to remove a feature from the default set, so you should be confident that you will keep those features. Optional dependencies Dependencies can be marked “optional”, which means they will not be compiled by default. For example, let’s say that our 2D image processing library uses an external package to handle GIF images. This can be expressed like this: [dependencies] gif = { version = "0.11.1", optional = true } By default, this optional dependency implicitly defines a feature that looks like this: [features] gif = ["dep:gif"] This means that this dependency will only be included if the gif feature is enabled. The same cfg(feature = "gif") syntax can be used in the code, and the dependency can be enabled just like any feature such as --features gif (see Command-line feature options below). In some cases, you may not want to expose a feature that has the same name as the optional dependency. For example, perhaps the optional dependency is an internal detail, or you want to group multiple optional dependencies together, or you just want to use a better name. If you specify the optional dependency with the dep: prefix anywhere in the [features] table, that disables the implicit feature. Note : The dep: syntax is only available starting with Rust 1.60. Previous versions can only use the implicit feature name. For example, let’s say in order to support the AVIF image format, our library needs two other dependencies to be enabled: [dependencies] ravif = { version = "0.6.3", optional = true } rgb = { version = "0.8.25", optional = true } [features] avif = ["dep:ravif", "dep:rgb"] In this example, the avif feature will enable the two listed dependencies. This also avoids creating the implicit ravif and rgb features, since we don’t want users to enable those individually as they are internal details to our crate. Note : Another way to optionally include a dependency is to use platform-specific dependencies . Instead of using features, these are conditional based on the target platform. Dependency features Features of dependencies can be enabled within the dependency declaration. The features key indicates which features to enable: [dependencies] # Enables the `derive` feature of serde. serde = { version = "1.0.118", features = ["derive"] } The default features can be disabled using default-features = false : [dependencies] flate2 = { version = "1.0.3", default-features = false, features = ["zlib-rs"] } Note : This may not ensure the default features are disabled. If another dependency includes flate2 without specifying default-features = false , then the default features will be enabled. See feature unification below for more details. Features of dependencies can also be enabled in the [features] table. The syntax is "package-name/feature-name" . For example: [dependencies] jpeg-decoder = { version = "0.1.20", default-features = false } [features] # Enables parallel processing support by enabling the "rayon" feature of jpeg-decoder. parallel = ["jpeg-decoder/rayon"] The "package-name/feature-name" syntax will also enable package-name if it is an optional dependency. Often this is not what you want. You can add a ? 
as in "package-name?/feature-name" which will only enable the given feature if something else enables the optional dependency. Note : The ? syntax is only available starting with Rust 1.60. For example, let’s say we have added some serialization support to our library, and it requires enabling a corresponding feature in some optional dependencies. That can be done like this: [dependencies] serde = { version = "1.0.133", optional = true } rgb = { version = "0.8.25", optional = true } [features] serde = ["dep:serde", "rgb?/serde"] In this example, enabling the serde feature will enable the serde dependency. It will also enable the serde feature for the rgb dependency, but only if something else has enabled the rgb dependency. Command-line feature options The following command-line flags can be used to control which features are enabled: --features FEATURES : Enables the listed features. Multiple features may be separated with commas or spaces. If using spaces, be sure to use quotes around all the features if running Cargo from a shell (such as --features "foo bar" ). If building multiple packages in a workspace , the package-name/feature-name syntax can be used to specify features for specific workspace members. --all-features : Activates all features of all packages selected on the command line. --no-default-features : Does not activate the default feature of the selected packages. NOTE : check the individual subcommand documentation for details. Not all flags are available for all subcommands. Feature unification Features are unique to the package that defines them. Enabling a feature on a package does not enable a feature of the same name on other packages. When a dependency is used by multiple packages, Cargo will use the union of all features enabled on that dependency when building it. This helps ensure that only a single copy of the dependency is used. See the features section of the resolver documentation for more details. For example, let’s look at the winapi package which uses a large number of features. If your package depends on a package foo which enables the “fileapi” and “handleapi” features of winapi , and another dependency bar which enables the “std” and “winnt” features of winapi , then winapi will be built with all four of those features enabled. A consequence of this is that features should be additive . That is, enabling a feature should not disable functionality, and it should usually be safe to enable any combination of features. A feature should not introduce a SemVer-incompatible change . For example, if you want to optionally support no_std environments, do not use a no_std feature. Instead, use a std feature that enables std . For example: #![allow(unused)] #![no_std] fn main() { #[cfg(feature = "std")] extern crate std; #[cfg(feature = "std")] pub fn function_that_requires_std() { // ... } } Mutually exclusive features There are rare cases where features may be mutually incompatible with one another. This should be avoided if at all possible, because it requires coordinating all uses of the package in the dependency graph to cooperate to avoid enabling them together. If it is not possible, consider adding a compile error to detect this scenario. For example: #[cfg(all(feature = "foo", feature = "bar"))] compile_error!("feature \"foo\" and feature \"bar\" cannot be enabled at the same time"); Instead of using mutually exclusive features, consider some other options: Split the functionality into separate packages. 
When there is a conflict, choose one feature over another . The cfg-if package can help with writing more complex cfg expressions. Architect the code to allow the features to be enabled concurrently, and use runtime options to control which is used. For example, use a config file, command-line argument, or environment variable to choose which behavior to enable. Inspecting resolved features In complex dependency graphs, it can sometimes be difficult to understand how different features get enabled on various packages. The cargo tree command offers several options to help inspect and visualize which features are enabled. Some options to try: cargo tree -e features : This will show features in the dependency graph. Each feature will appear showing which package enabled it. cargo tree -f "{p} {f}" : This is a more compact view that shows a comma-separated list of features enabled on each package. cargo tree -e features -i foo : This will invert the tree, showing how features flow into the given package “foo”. This can be useful because viewing the entire graph can be quite large and overwhelming. Use this when you are trying to figure out which features are enabled on a specific package and why. See the example at the bottom of the cargo tree page on how to read this. Feature resolver version 2 A different feature resolver can be specified with the resolver field in Cargo.toml , like this: [package] name = "my-package" version = "1.0.0" resolver = "2" See the resolver versions section for more detail on specifying resolver versions. The version "2" resolver avoids unifying features in a few situations where that unification can be unwanted. The exact situations are described in the resolver chapter , but in short, it avoids unifying in these situations: Features enabled on platform-specific dependencies for target architectures not currently being built are ignored. Build-dependencies and proc-macros do not share features with normal dependencies. Dev-dependencies do not activate features unless building a Cargo target that needs them (like tests or examples). Avoiding the unification is necessary for some situations. For example, if a build-dependency enables a std feature, and the same dependency is used as a normal dependency for a no_std environment, enabling std would break the build. However, one drawback is that this can increase build times because the dependency is built multiple times (each with different features). When using the version "2" resolver, it is recommended to check for dependencies that are built multiple times to reduce overall build time. If it is not required to build those duplicated packages with separate features, consider adding features to the features list in the dependency declaration so that the duplicates end up with the same features (and thus Cargo will build it only once). You can detect these duplicate dependencies with the cargo tree --duplicates command. It will show which packages are built multiple times; look for any entries listed with the same version. See Inspecting resolved features for more on fetching information on the resolved features. For build dependencies, this is not necessary if you are cross-compiling with the --target flag because build dependencies are always built separately from normal dependencies in that scenario. Resolver version 2 command-line flags The resolver = "2" setting also changes the behavior of the --features and --no-default-features command-line options . 
With version "1" , you can only enable features for the package in the current working directory. For example, in a workspace with packages foo and bar , and you are in the directory for package foo , and ran the command cargo build -p bar --features bar-feat , this would fail because the --features flag only allowed enabling features on foo . With resolver = "2" , the features flags allow enabling features for any of the packages selected on the command-line with -p and --workspace flags. For example: # This command is allowed with resolver = "2", regardless of which directory # you are in. cargo build -p foo -p bar --features foo-feat,bar-feat # This explicit equivalent works with any resolver version: cargo build -p foo -p bar --features foo/foo-feat,bar/bar-feat Additionally, with resolver = "1" , the --no-default-features flag only disables the default feature for the package in the current directory. With version “2”, it will disable the default features for all workspace members. Build scripts Build scripts can detect which features are enabled on the package by inspecting the CARGO_FEATURE_<name> environment variable, where <name> is the feature name converted to uppercase and - converted to _ . Required features The required-features field can be used to disable specific Cargo targets if a feature is not enabled. See the linked documentation for more details. SemVer compatibility Enabling a feature should not introduce a SemVer-incompatible change. For example, the feature shouldn’t change an existing API in a way that could break existing uses. More details about what changes are compatible can be found in the SemVer Compatibility chapter . Care should be taken when adding and removing feature definitions and optional dependencies, as these can sometimes be backwards-incompatible changes. More details can be found in the Cargo section of the SemVer Compatibility chapter. In short, follow these rules: The following is usually safe to do in a minor release: Add a new feature or optional dependency . Change the features used on a dependency . The following should usually not be done in a minor release: Remove a feature or optional dependency . Moving existing public code behind a feature . Remove a feature from a feature list . See the links for caveats and examples. Feature documentation and discovery You are encouraged to document which features are available in your package. This can be done by adding doc comments at the top of lib.rs . As an example, see the regex crate source , which when rendered can be viewed on docs.rs . If you have other documentation, such as a user guide, consider adding the documentation there (for example, see serde.rs ). If you have a binary project, consider documenting the features in the README or other documentation for the project (for example, see sccache ). Clearly documenting the features can set expectations about features that are considered “unstable” or otherwise shouldn’t be used. For example, if there is an optional dependency, but you don’t want users to explicitly list that optional dependency as a feature, exclude it from the documented list. Documentation published on docs.rs can use metadata in Cargo.toml to control which features are enabled when the documentation is built. See docs.rs metadata documentation for more details. Note : Rustdoc has experimental support for annotating the documentation to indicate which features are required to use certain APIs. See the doc_cfg documentation for more details. 
An example is the syn documentation , where you can see colored boxes which note which features are required to use it. Discovering features When features are documented in the library API, this can make it easier for your users to discover which features are available and what they do. If the feature documentation for a package isn’t readily available, you can look at the Cargo.toml file, but sometimes it can be hard to track it down. The crate page on crates.io has a link to the source repository if available. Tools like cargo vendor or cargo-clone-crate can be used to download the source and inspect it. Feature combinations Because features are a form of conditional compilation, they require an exponential number of configurations and test cases to be 100% covered. By default, tests, docs, and other tooling such as Clippy will only run with the default set of features. We encourage you to consider your strategy and tooling in regards to different feature combinations — Every project will have different requirements in conjunction with time, resources, and the cost-benefit of covering specific scenarios. Common configurations may be with / without default features, specific combinations of features, or all combinations of features. | 2026-01-13T09:29:13 |
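Tying together the concepts from the features chapter above (default features, optional dependencies hidden behind dep:, and weak ? dependency features), a combined Cargo.toml sketch built from the chapter's own example crates might look like this (versions and feature names are taken from the examples above and are illustrative, not a recommendation):
[dependencies]
serde = { version = "1.0.133", optional = true }
ravif = { version = "0.6.3", optional = true }
rgb = { version = "0.8.25", optional = true }

[features]
default = ["bmp"]
bmp = []
avif = ["dep:ravif", "dep:rgb"]
serde = ["dep:serde", "rgb?/serde"]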
https://doc.rust-lang.org/cargo/reference/config.html#paths | Configuration - The Cargo Book Configuration This document explains how Cargo’s configuration system works, as well as the available configuration keys. For configuration of a package through its manifest, see the manifest format . Hierarchical structure Cargo allows local configuration for a particular package as well as global configuration. It looks for configuration files in the current directory and all parent directories. If, for example, Cargo were invoked in /projects/foo/bar/baz , then the following configuration files would be probed for and unified in this order: /projects/foo/bar/baz/.cargo/config.toml /projects/foo/bar/.cargo/config.toml /projects/foo/.cargo/config.toml /projects/.cargo/config.toml /.cargo/config.toml $CARGO_HOME/config.toml which defaults to: Windows: %USERPROFILE%\.cargo\config.toml Unix: $HOME/.cargo/config.toml With this structure, you can specify configuration per-package, and even possibly check it into version control. You can also specify personal defaults with a configuration file in your home directory. If a key is specified in multiple config files, the values will get merged together. Numbers, strings, and booleans will use the value in the deeper config directory taking precedence over ancestor directories, where the home directory is the lowest priority. Arrays will be joined together with higher precedence items being placed later in the merged array. At present, when being invoked from a workspace, Cargo does not read config files from crates within the workspace. For example, if a workspace has two crates in it, named /projects/foo/bar/baz/mylib and /projects/foo/bar/baz/mybin , and there are Cargo configs at /projects/foo/bar/baz/mylib/.cargo/config.toml and /projects/foo/bar/baz/mybin/.cargo/config.toml , Cargo does not read those configuration files if it is invoked from the workspace root ( /projects/foo/bar/baz/ ). Note: Cargo also reads config files without the .toml extension, such as .cargo/config . Support for the .toml extension was added in version 1.39 and is the preferred form. If both files exist, Cargo will use the file without the extension. Configuration format Configuration files are written in the TOML format (like the manifest), with simple key-value pairs inside of sections (tables). The following is a quick overview of all settings, with detailed descriptions found below.
paths = ["/path/to/override"] # path dependency overrides [alias] # command aliases b = "build" c = "check" t = "test" r = "run" rr = "run --release" recursive_example = "rr --example recursions" space_example = ["run", "--release", "--", "\"command list\""] [build] jobs = 1 # number of parallel jobs, defaults to # of CPUs rustc = "rustc" # the rust compiler tool rustc-wrapper = "…" # run this wrapper instead of `rustc` rustc-workspace-wrapper = "…" # run this wrapper instead of `rustc` for workspace members rustdoc = "rustdoc" # the doc generator tool target = "triple" # build for the target triple (ignored by `cargo install`) target-dir = "target" # path of where to place generated artifacts build-dir = "target" # path of where to place intermediate build artifacts rustflags = ["…", "…"] # custom flags to pass to all compiler invocations rustdocflags = ["…", "…"] # custom flags to pass to rustdoc incremental = true # whether or not to enable incremental compilation dep-info-basedir = "…" # path for the base directory for targets in depfiles [credential-alias] # Provides a way to define aliases for credential providers. my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] [doc] browser = "chromium" # browser to use with `cargo doc --open`, # overrides the `BROWSER` environment variable [env] # Set ENV_VAR_NAME=value for any process run by Cargo ENV_VAR_NAME = "value" # Set even if already present in environment ENV_VAR_NAME_2 = { value = "value", force = true } # `value` is relative to the parent of `.cargo/config.toml`, env var will be the full absolute path ENV_VAR_NAME_3 = { value = "relative/path", relative = true } [future-incompat-report] frequency = 'always' # when to display a notification about a future incompat report [cache] auto-clean-frequency = "1 day" # How often to perform automatic cache cleaning [cargo-new] vcs = "none" # VCS to use ('git', 'hg', 'pijul', 'fossil', 'none') [http] debug = false # HTTP debugging proxy = "host:port" # HTTP proxy in libcurl format ssl-version = "tlsv1.3" # TLS version to use ssl-version.max = "tlsv1.3" # maximum TLS version ssl-version.min = "tlsv1.1" # minimum TLS version timeout = 30 # timeout for each HTTP request, in seconds low-speed-limit = 10 # network timeout threshold (bytes/sec) cainfo = "cert.pem" # path to Certificate Authority (CA) bundle proxy-cainfo = "cert.pem" # path to proxy Certificate Authority (CA) bundle check-revoke = true # check for SSL certificate revocation multiplexing = true # HTTP/2 multiplexing user-agent = "…" # the user-agent header [install] root = "/some/path" # `cargo install` destination directory [net] retry = 3 # network retries git-fetch-with-cli = true # use the `git` executable for git operations offline = true # do not access the network [net.ssh] known-hosts = ["..."] # known SSH host keys [patch.<registry>] # Same keys as for [patch] in Cargo.toml [profile.<name>] # Modify profile settings via config. inherits = "dev" # Inherits settings from [profile.dev]. opt-level = 0 # Optimization level. debug = true # Include debug info. split-debuginfo = '...' # Debug info splitting behavior. strip = "none" # Removes symbols or debuginfo. debug-assertions = true # Enables debug assertions. overflow-checks = true # Enables runtime integer overflow checks. lto = false # Sets link-time optimization. panic = 'unwind' # The panic strategy. incremental = true # Incremental compilation. codegen-units = 16 # Number of code generation units. rpath = false # Sets the rpath linking option. 
[profile.<name>.build-override] # Overrides build-script settings. # Same keys for a normal profile. [profile.<name>.package.<name>] # Override profile for a package. # Same keys for a normal profile (minus `panic`, `lto`, and `rpath`). [resolver] incompatible-rust-versions = "allow" # Specifies how resolver reacts to these [registries.<name>] # registries other than crates.io index = "…" # URL of the registry index token = "…" # authentication token for the registry credential-provider = "cargo:token" # The credential provider for this registry. [registries.crates-io] protocol = "sparse" # The protocol to use to access crates.io. [registry] default = "…" # name of the default registry token = "…" # authentication token for crates.io credential-provider = "cargo:token" # The credential provider for crates.io. global-credential-providers = ["cargo:token"] # The credential providers to use by default. [source.<name>] # source definition and replacement replace-with = "…" # replace this source with the given named source directory = "…" # path to a directory source registry = "…" # URL to a registry source local-registry = "…" # path to a local registry source git = "…" # URL of a git repository source branch = "…" # branch name for the git repository tag = "…" # tag name for the git repository rev = "…" # revision for the git repository [target.<triple>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` rustdocflags = ["…", "…"] # custom flags for `rustdoc` [target.<cfg>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` [target.<triple>.<links>] # `links` build script override rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] quiet = false # whether cargo output is quiet verbose = false # whether cargo provides verbose output color = 'auto' # whether cargo colorizes output hyperlinks = true # whether cargo inserts links into output unicode = true # whether cargo can render output using non-ASCII unicode characters progress.when = 'auto' # whether cargo shows progress bar progress.width = 80 # width of progress bar progress.term-integration = true # whether cargo reports progress to terminal emulator Environment variables Cargo can also be configured through environment variables in addition to the TOML configuration files. For each configuration key of the form foo.bar the environment variable CARGO_FOO_BAR can also be used to define the value. Keys are converted to uppercase, dots and dashes are converted to underscores. For example the target.x86_64-unknown-linux-gnu.runner key can also be defined by the CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER environment variable. Environment variables will take precedence over TOML configuration files. Currently only integer, boolean, string and some array values are supported to be defined by environment variables. Descriptions below indicate which keys support environment variables and otherwise they are not supported due to technical issues . In addition to the system above, Cargo recognizes a few other specific environment variables . Command-line overrides Cargo also accepts arbitrary configuration overrides through the --config command-line option. 
The argument should be in TOML syntax of KEY=VALUE or provided as a path to an extra configuration file: # With `KEY=VALUE` in TOML syntax cargo --config net.git-fetch-with-cli=true fetch # With a path to a configuration file cargo --config ./path/to/my/extra-config.toml fetch The --config option may be specified multiple times, in which case the values are merged in left-to-right order, using the same merging logic that is used when multiple configuration files apply. Configuration values specified this way take precedence over environment variables, which take precedence over configuration files. When the --config option is provided as an extra configuration file, the configuration file loaded this way follows the same precedence rules as other values specified directly with --config . Some examples of what it looks like using Bourne shell syntax: # Most shells will require escaping. cargo --config http.proxy=\"http://example.com\" … # Spaces may be used. cargo --config "net.git-fetch-with-cli = true" … # TOML array example. Single quotes make it easier to read and write. cargo --config 'build.rustdocflags = ["--html-in-header", "header.html"]' … # Example of a complex TOML key. cargo --config "target.'cfg(all(target_arch = \"arm\", target_os = \"none\"))'.runner = 'my-runner'" … # Example of overriding a profile setting. cargo --config profile.dev.package.image.opt-level=3 … Config-relative paths Paths in config files may be absolute, relative, or a bare name without any path separators. Paths for executables without a path separator will use the PATH environment variable to search for the executable. Paths for non-executables will be relative to where the config value is defined. In particular, the rules are: For environment variables, paths are relative to the current working directory. For config values loaded directly from the --config KEY=VALUE option, paths are relative to the current working directory. For config files, paths are relative to the parent directory of the directory where the config files were defined, regardless of whether those files come from hierarchical probing or from the --config <path> option. Note: To maintain consistency with existing .cargo/config.toml probing behavior, it is by design that a path in a config file passed via --config <path> is also relative to two levels up from the config file itself. To avoid unexpected results, the rule of thumb is to put your extra config files at the same level as the discovered .cargo/config.toml in your project. For instance, given a project /my/project , it is recommended to put config files under /my/project/.cargo or a new directory at the same level, such as /my/project/.config . # Relative path examples. [target.x86_64-unknown-linux-gnu] runner = "foo" # Searches `PATH` for `foo`. [source.vendored-sources] # Directory is relative to the parent where `.cargo/config.toml` is located. # For example, `/my/project/.cargo/config.toml` would result in `/my/project/vendor`. directory = "vendor" Executable paths with arguments Some Cargo commands invoke external programs, which can be configured as a path and some number of arguments. The value may be an array of strings like ['/path/to/program', 'somearg'] or a space-separated string like '/path/to/program somearg' . If the path to the executable contains a space, the list form must be used. If Cargo is passing other arguments to the program such as a path to open or run, they will be passed after the last specified argument in the value of an option of this format.
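For example (a sketch; the triple, the emulator name, and the library path are assumptions for a typical cross-compilation setup), a runner with arguments can be written in the array form:
[target.aarch64-unknown-linux-gnu]
runner = ["qemu-aarch64", "-L", "/usr/aarch64-linux-gnu"]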
If the specified program does not have path separators, Cargo will search PATH for its executable. Credentials Configuration values with sensitive information are stored in the $CARGO_HOME/credentials.toml file. This file is automatically created and updated by cargo login and cargo logout when using the cargo:token credential provider. Tokens are used by some Cargo commands such as cargo publish for authenticating with remote registries. Care should be taken to protect the tokens and to keep them secret. It follows the same format as Cargo config files. [registry] token = "…" # Access token for crates.io [registries.<name>] token = "…" # Access token for the named registry As with most other config values, tokens may be specified with environment variables. The token for crates.io may be specified with the CARGO_REGISTRY_TOKEN environment variable. Tokens for other registries may be specified with environment variables of the form CARGO_REGISTRIES_<name>_TOKEN where <name> is the name of the registry in all capital letters. Note: Cargo also reads and writes credential files without the .toml extension, such as .cargo/credentials . Support for the .toml extension was added in version 1.39. In version 1.68, Cargo writes to the file with the extension by default. However, for backward compatibility reason, when both files exist, Cargo will read and write the file without the extension. Configuration keys This section documents all configuration keys. The description for keys with variable parts are annotated with angled brackets like target.<triple> where the <triple> part can be any target triple like target.x86_64-pc-windows-msvc . paths Type: array of strings (paths) Default: none Environment: not supported An array of paths to local packages which are to be used as overrides for dependencies. For more information see the Overriding Dependencies guide . [alias] Type: string or array of strings Default: see below Environment: CARGO_ALIAS_<name> The [alias] table defines CLI command aliases. For example, running cargo b is an alias for running cargo build . Each key in the table is the subcommand, and the value is the actual command to run. The value may be an array of strings, where the first element is the command and the following are arguments. It may also be a string, which will be split on spaces into subcommand and arguments. The following aliases are built-in to Cargo: [alias] b = "build" c = "check" d = "doc" t = "test" r = "run" rm = "remove" Aliases are not allowed to redefine existing built-in commands. Aliases are recursive: [alias] rr = "run --release" recursive_example = "rr --example recursions" [build] The [build] table controls build-time operations and compiler settings. build.jobs Type: integer or string Default: number of logical CPUs Environment: CARGO_BUILD_JOBS Sets the maximum number of compiler processes to run in parallel. If negative, it sets the maximum number of compiler processes to the number of logical CPUs plus provided value. Should not be 0. If a string default is provided, it sets the value back to defaults. Can be overridden with the --jobs CLI option. build.rustc Type: string (program path) Default: "rustc" Environment: CARGO_BUILD_RUSTC or RUSTC Sets the executable to use for rustc . build.rustc-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WRAPPER or RUSTC_WRAPPER Sets a wrapper to execute instead of rustc . 
The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). build.rustc-workspace-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WORKSPACE_WRAPPER or RUSTC_WORKSPACE_WRAPPER Sets a wrapper to execute instead of rustc , for workspace members only. When building a single-package project without workspaces, that package is considered to be the workspace. The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). It affects the filename hash so that artifacts produced by the wrapper are cached separately. If both rustc-wrapper and rustc-workspace-wrapper are set, then they will be nested: the final invocation is $RUSTC_WRAPPER $RUSTC_WORKSPACE_WRAPPER $RUSTC . build.rustdoc Type: string (program path) Default: "rustdoc" Environment: CARGO_BUILD_RUSTDOC or RUSTDOC Sets the executable to use for rustdoc . build.target Type: string or array of strings Default: host platform Environment: CARGO_BUILD_TARGET The default target platform triples to compile to. Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. Can be overridden with the --target CLI option. [build] target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"] build.target-dir Type: string (path) Default: "target" Environment: CARGO_BUILD_TARGET_DIR or CARGO_TARGET_DIR The path to where all compiler output is placed. The default if not specified is a directory named target located at the root of the workspace. Can be overridden with the --target-dir CLI option. For more information see the build cache documentation . build.build-dir Type: string (path) Default: Defaults to the value of build.target-dir Environment: CARGO_BUILD_BUILD_DIR The directory where intermediate build artifacts will be stored. Intermediate artifacts are produced by Rustc/Cargo during the build process. This option supports path templating. Available template variables: {workspace-root} resolves to root of the current workspace. {cargo-cache-home} resolves to CARGO_HOME {workspace-path-hash} resolves to a hash of the manifest path For more information see the build cache documentation . build.rustflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTFLAGS or CARGO_ENCODED_RUSTFLAGS or RUSTFLAGS Extra command-line flags to pass to rustc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTFLAGS environment variable. RUSTFLAGS environment variable. All matching target.<triple>.rustflags and target.<cfg>.rustflags config entries joined together. build.rustflags config value. Additional flags may also be passed with the cargo rustc command. If the --target flag (or build.target ) is used, then the flags will only be passed to the compiler for the target. Things being built for the host, such as build scripts or proc macros, will not receive the args. 
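A minimal sketch of that behavior, with an illustrative triple and flag (both are placeholders):

[build]
target = "x86_64-unknown-linux-gnu"

[target.x86_64-unknown-linux-gnu]
# Passed only to crates compiled for the configured target; build scripts and
# proc macros compiled for the host do not receive it, as described above.
rustflags = ["-C", "target-cpu=native"]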
Without --target , the flags will be passed to all compiler invocations (including build scripts and proc macros) because dependencies are shared. If you have args that you do not want to pass to build scripts or proc macros and are building for the host, pass --target with the host triple . It is not recommended to pass in flags that Cargo itself usually manages. For example, the flags driven by profiles are best handled by setting the appropriate profile setting. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.rustdocflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTDOCFLAGS or CARGO_ENCODED_RUSTDOCFLAGS or RUSTDOCFLAGS Extra command-line flags to pass to rustdoc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTDOCFLAGS environment variable. RUSTDOCFLAGS environment variable. All matching target.<triple>.rustdocflags config entries joined together. build.rustdocflags config value. Additional flags may also be passed with the cargo rustdoc command. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.incremental Type: bool Default: from profile Environment: CARGO_BUILD_INCREMENTAL or CARGO_INCREMENTAL Whether or not to perform incremental compilation . The default if not set is to use the value from the profile . Otherwise this overrides the setting of all profiles. The CARGO_INCREMENTAL environment variable can be set to 1 to force enable incremental compilation for all profiles, or 0 to disable it. This env var overrides the config setting. build.dep-info-basedir Type: string (path) Default: none Environment: CARGO_BUILD_DEP_INFO_BASEDIR Strips the given path prefix from dep info file paths. This config setting is intended to convert absolute paths to relative paths for tools that require relative paths. The setting itself is a config-relative path. So, for example, a value of "." would strip all paths starting with the parent directory of the .cargo directory. build.pipelining This option is deprecated and unused. Cargo always has pipelining enabled. [credential-alias] Type: string or array of strings Default: empty Environment: CARGO_CREDENTIAL_ALIAS_<name> The [credential-alias] table defines credential provider aliases. These aliases can be referenced as an element of the registry.global-credential-providers array, or as a credential provider for a specific registry under registries.<NAME>.credential-provider . If specified as a string, the value will be split on spaces into path and arguments. For example, to define an alias called my-alias : [credential-alias] my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] See Registry Authentication for more information. [doc] The [doc] table defines options for the cargo doc command. 
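For example, a small sketch of this table (its browser key is documented just below; the browser command and flag are only illustrative):

[doc]
# Open documentation built with `cargo doc --open` in a specific browser with an extra flag.
browser = ["firefox", "--new-window"]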
doc.browser Type: string or array of strings ( program path with args ) Default: BROWSER environment variable, or, if that is missing, opening the link in a system specific way This option sets the browser to be used by cargo doc , overriding the BROWSER environment variable when opening documentation with the --open option. [cargo-new] The [cargo-new] table defines defaults for the cargo new command. cargo-new.name This option is deprecated and unused. cargo-new.email This option is deprecated and unused. cargo-new.vcs Type: string Default: "git" or "none" Environment: CARGO_CARGO_NEW_VCS Specifies the source control system to use for initializing a new repository. Valid values are git , hg (for Mercurial), pijul , fossil or none to disable this behavior. Defaults to git , or none if already inside a VCS repository. Can be overridden with the --vcs CLI option. [env] The [env] section allows you to set additional environment variables for build scripts, rustc invocations, cargo run and cargo build . [env] OPENSSL_DIR = "/opt/openssl" By default, the variables specified will not override values that already exist in the environment. This behavior can be changed by setting the force flag. Setting the relative flag evaluates the value as a config-relative path that is relative to the parent directory of the .cargo directory that contains the config.toml file. The value of the environment variable will be the full absolute path. [env] TMPDIR = { value = "/home/tmp", force = true } OPENSSL_DIR = { value = "vendor/openssl", relative = true } [future-incompat-report] The [future-incompat-report] table controls setting for future incompat reporting future-incompat-report.frequency Type: string Default: "always" Environment: CARGO_FUTURE_INCOMPAT_REPORT_FREQUENCY Controls how often we display a notification to the terminal when a future incompat report is available. Possible values: always (default): Always display a notification when a command (e.g. cargo build ) produces a future incompat report never : Never display a notification [cache] The [cache] table defines settings for cargo’s caches. Global caches When running cargo commands, Cargo will automatically track which files you are using within the global cache. Periodically, Cargo will delete files that have not been used for some period of time. It will delete files that have to be downloaded from the network if they have not been used in 3 months. Files that can be generated without network access will be deleted if they have not been used in 1 month. The automatic deletion of files only occurs when running commands that are already doing a significant amount of work, such as all of the build commands ( cargo build , cargo test , cargo check , etc.), and cargo fetch . Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time. Note : This tracking is currently only implemented for the global cache in Cargo’s home directory. This includes registry indexes and source files downloaded from registries and git dependencies. Support for tracking build artifacts is not yet implemented, and tracked in cargo#13136 . Additionally, there is an unstable feature to support manually triggering cache cleaning, and to further customize the configuration options. See the Unstable chapter for more information. 
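A minimal sketch of the stable part of this table, using the auto-clean-frequency key documented next (the interval chosen here is only an example):

[cache]
# Check for unused cache files at most every 7 days instead of the default "1 day".
auto-clean-frequency = "7 days"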
cache.auto-clean-frequency Type: string Default: "1 day" Environment: CARGO_CACHE_AUTO_CLEAN_FREQUENCY This option defines how often Cargo will automatically delete unused files in the global cache. This does not define how old the files must be, those thresholds are described above . It supports the following settings: "never" — Never deletes old files. "always" — Checks to delete old files every time Cargo runs. An integer followed by “seconds”, “minutes”, “hours”, “days”, “weeks”, or “months” — Checks to delete old files at most the given time frame. [http] The [http] table defines settings for HTTP behavior. This includes fetching crate dependencies and accessing remote git repositories. http.debug Type: boolean Default: false Environment: CARGO_HTTP_DEBUG If true , enables debugging of HTTP requests. The debug information can be seen by setting the CARGO_LOG=network=debug environment variable (or use network=trace for even more information). Be wary when posting logs from this output in a public location. The output may include headers with authentication tokens which you don’t want to leak! Be sure to review logs before posting them. http.proxy Type: string Default: none Environment: CARGO_HTTP_PROXY or HTTPS_PROXY or https_proxy or http_proxy Sets an HTTP and HTTPS proxy to use. The format is in libcurl format as in [protocol://]host[:port] . If not set, Cargo will also check the http.proxy setting in your global git configuration. If none of those are set, the HTTPS_PROXY or https_proxy environment variables set the proxy for HTTPS requests, and http_proxy sets it for HTTP requests. http.timeout Type: integer Default: 30 Environment: CARGO_HTTP_TIMEOUT or HTTP_TIMEOUT Sets the timeout for each HTTP request, in seconds. http.cainfo Type: string (path) Default: none Environment: CARGO_HTTP_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify TLS certificates. If not specified, Cargo attempts to use the system certificates. http.proxy-cainfo Type: string (path) Default: falls back to http.cainfo if not set Environment: CARGO_HTTP_PROXY_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify proxy TLS certificates. http.check-revoke Type: boolean Default: true (Windows) false (all others) Environment: CARGO_HTTP_CHECK_REVOKE This determines whether or not TLS certificate revocation checks should be performed. This only works on Windows. http.ssl-version Type: string or min/max table Default: none Environment: CARGO_HTTP_SSL_VERSION This sets the minimum TLS version to use. It takes a string, with one of the possible values of "default" , "tlsv1" , "tlsv1.0" , "tlsv1.1" , "tlsv1.2" , or "tlsv1.3" . This may alternatively take a table with two keys, min and max , which each take a string value of the same kind that specifies the minimum and maximum range of TLS versions to use. The default is a minimum version of "tlsv1.0" and a max of the newest version supported on your platform, typically "tlsv1.3" . http.low-speed-limit Type: integer Default: 10 Environment: CARGO_HTTP_LOW_SPEED_LIMIT This setting controls timeout behavior for slow connections. If the average transfer speed in bytes per second is below the given value for http.timeout seconds (default 30 seconds), then the connection is considered too slow and Cargo will abort and retry. http.multiplexing Type: boolean Default: true Environment: CARGO_HTTP_MULTIPLEXING When true , Cargo will attempt to use the HTTP2 protocol with multiplexing. 
This allows multiple requests to use the same connection, usually improving performance when fetching multiple files. If false , Cargo will use HTTP 1.1 without pipelining. http.user-agent Type: string Default: Cargo’s version Environment: CARGO_HTTP_USER_AGENT Specifies a custom user-agent header to use. The default if not specified is a string that includes Cargo’s version. [install] The [install] table defines defaults for the cargo install command. install.root Type: string (path) Default: Cargo’s home directory Environment: CARGO_INSTALL_ROOT Sets the path to the root directory for installing executables for cargo install . Executables go into a bin directory underneath the root. To track information of installed executables, some extra files, such as .crates.toml and .crates2.json , are also created under this root. The default if not specified is Cargo’s home directory (default .cargo in your home directory). Can be overridden with the --root command-line option. [net] The [net] table controls networking configuration. net.retry Type: integer Default: 3 Environment: CARGO_NET_RETRY Number of times to retry possibly spurious network errors. net.git-fetch-with-cli Type: boolean Default: false Environment: CARGO_NET_GIT_FETCH_WITH_CLI If this is true , then Cargo will use the git executable to fetch registry indexes and git dependencies. If false , then it uses a built-in git library. Setting this to true can be helpful if you have special authentication requirements that Cargo does not support. See Git Authentication for more information about setting up git authentication. net.offline Type: boolean Default: false Environment: CARGO_NET_OFFLINE If this is true , then Cargo will avoid accessing the network, and attempt to proceed with locally cached data. If false , Cargo will access the network as needed, and generate an error if it encounters a network error. Can be overridden with the --offline command-line option. net.ssh The [net.ssh] table contains settings for SSH connections. net.ssh.known-hosts Type: array of strings Default: see description Environment: not supported The known-hosts array contains a list of SSH host keys that should be accepted as valid when connecting to an SSH server (such as for SSH git dependencies). Each entry should be a string in a format similar to OpenSSH known_hosts files. Each string should start with one or more hostnames separated by commas, a space, the key type name, a space, and the base64-encoded key. For example: [net.ssh] known-hosts = [ "example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFO4Q5T0UV0SQevair9PFwoxY9dl4pQl3u5phoqJH3cF" ] Cargo will attempt to load known hosts keys from common locations supported in OpenSSH, and will join those with any listed in a Cargo configuration file. If any matching entry has the correct key, the connection will be allowed. Cargo comes with the host keys for github.com built-in. If those ever change, you can add the new keys to the config or known_hosts file. See Git Authentication for more details. [patch] Just as you can override dependencies using [patch] in Cargo.toml , you can override them in the cargo configuration file to apply those patches to any affected build. The format is identical to the one used in Cargo.toml . Since .cargo/config.toml files are not usually checked into source control, you should prefer patching using Cargo.toml where possible to ensure that other developers can compile your crate in their own environments. 
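As a sketch of the syntax, a config-file patch looks exactly like its Cargo.toml counterpart; the crate name and path below are hypothetical:

[patch.crates-io]
# Hypothetical: redirect the `serde` dependency to a local checkout for any build
# that picks up this configuration file.
serde = { path = "/path/to/local/serde" }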
Patching through cargo configuration files is generally only appropriate when the patch section is automatically generated by an external build tool. If a given dependency is patched both in a cargo configuration file and a Cargo.toml file, the patch in the configuration file is used. If multiple configuration files patch the same dependency, standard cargo configuration merging is used, which prefers the value defined closest to the current directory, with $HOME/.cargo/config.toml taking the lowest precedence. Relative path dependencies in such a [patch] section are resolved relative to the configuration file they appear in. [profile] The [profile] table can be used to globally change profile settings, and override settings specified in Cargo.toml . It has the same syntax and options as profiles specified in Cargo.toml . See the Profiles chapter for details about the options. [profile.<name>.build-override] Environment: CARGO_PROFILE_<name>_BUILD_OVERRIDE_<key> The build-override table overrides settings for build scripts, proc macros, and their dependencies. It has the same keys as a normal profile. See the overrides section for more details. [profile.<name>.package.<name>] Environment: not supported The package table overrides settings for specific packages. It has the same keys as a normal profile, minus the panic , lto , and rpath settings. See the overrides section for more details. profile.<name>.codegen-units Type: integer Default: See profile docs. Environment: CARGO_PROFILE_<name>_CODEGEN_UNITS See codegen-units . profile.<name>.debug Type: integer or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG See debug . profile.<name>.split-debuginfo Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_SPLIT_DEBUGINFO See split-debuginfo . profile.<name>.debug-assertions Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG_ASSERTIONS See debug-assertions . profile.<name>.incremental Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_INCREMENTAL See incremental . profile.<name>.lto Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_LTO See lto . profile.<name>.overflow-checks Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_OVERFLOW_CHECKS See overflow-checks . profile.<name>.opt-level Type: integer or string Default: See profile docs. Environment: CARGO_PROFILE_<name>_OPT_LEVEL See opt-level . profile.<name>.panic Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_PANIC See panic . profile.<name>.rpath Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_RPATH See rpath . profile.<name>.strip Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_STRIP See strip . [resolver] The [resolver] table overrides dependency resolution behavior for local development (e.g. excludes cargo install ). resolver.incompatible-rust-versions Type: string Default: See resolver docs Environment: CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS When resolving which version of a dependency to use, select how versions with incompatible package.rust-version s are treated. 
Values include: allow : treat rust-version -incompatible versions like any other version fallback : only consider rust-version -incompatible versions if no other version matched Can be overridden with --ignore-rust-version CLI option Setting the dependency’s version requirement higher than any version with a compatible rust-version Specifying the version to cargo update with --precise See the resolver chapter for more details. MSRV: allow is supported on any version fallback is respected as of 1.84 [registries] The [registries] table is used for specifying additional registries . It consists of a sub-table for each named registry. registries.<name>.index Type: string (url) Default: none Environment: CARGO_REGISTRIES_<name>_INDEX Specifies the URL of the index for the registry. registries.<name>.token Type: string Default: none Environment: CARGO_REGISTRIES_<name>_TOKEN Specifies the authentication token for the given registry. This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registries.<name>.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRIES_<name>_CREDENTIAL_PROVIDER Specifies the credential provider for the given registry. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registries.crates-io.protocol Type: string Default: "sparse" Environment: CARGO_REGISTRIES_CRATES_IO_PROTOCOL Specifies the protocol used to access crates.io. Allowed values are git or sparse . git causes Cargo to clone the entire index of all packages ever published to crates.io from https://github.com/rust-lang/crates.io-index/ . This can have performance implications due to the size of the index. sparse is a newer protocol which uses HTTPS to download only what is necessary from https://index.crates.io/ . This can result in a significant performance improvement for resolving new dependencies in most situations. More information about registry protocols may be found in the Registries chapter . [registry] The [registry] table controls the default registry used when one is not specified. registry.index This value is no longer accepted and should not be used. registry.default Type: string Default: "crates-io" Environment: CARGO_REGISTRY_DEFAULT The name of the registry (from the registries table ) to use by default for registry commands like cargo publish . Can be overridden with the --registry command-line option. registry.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRY_CREDENTIAL_PROVIDER Specifies the credential provider for crates.io . If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registry.token Type: string Default: none Environment: CARGO_REGISTRY_TOKEN Specifies the authentication token for crates.io . This value should only appear in the credentials file. 
This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registry.global-credential-providers Type: array Default: ["cargo:token"] Environment: CARGO_REGISTRY_GLOBAL_CREDENTIAL_PROVIDERS Specifies the list of global credential providers. If credential provider is not set for a specific registry using registries.<name>.credential-provider , Cargo will use the credential providers in this list. Providers toward the end of the list have precedence. Path and arguments are split on spaces. If the path or arguments contains spaces, the credential provider should be defined in the [credential-alias] table and referenced here by its alias. See Registry Authentication for more information. [source] The [source] table defines the registry sources available. See Source Replacement for more information. It consists of a sub-table for each named source. A source should only define one kind (directory, registry, local-registry, or git). source.<name>.replace-with Type: string Default: none Environment: not supported If set, replace this source with the given named source or named registry. source.<name>.directory Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a directory source. source.<name>.registry Type: string (url) Default: none Environment: not supported Sets the URL to use for a registry source. source.<name>.local-registry Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a local registry source. source.<name>.git Type: string (url) Default: none Environment: not supported Sets the URL to use for a git repository source. source.<name>.branch Type: string Default: none Environment: not supported Sets the branch name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.tag Type: string Default: none Environment: not supported Sets the tag name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.rev Type: string Default: none Environment: not supported Sets the revision to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. [target] The [target] table is used for specifying settings for specific platform targets. It consists of a sub-table which is either a platform triple or a cfg() expression . The given values will be used if the target platform matches either the <triple> value or the <cfg> expression. [target.thumbv7m-none-eabi] linker = "arm-none-eabi-gcc" runner = "my-emulator" rustflags = ["…", "…"] [target.'cfg(all(target_arch = "arm", target_os = "none"))'] runner = "my-arm-wrapper" rustflags = ["…", "…"] cfg values come from those built-in to the compiler (run rustc --print=cfg to view) and extra --cfg flags passed to rustc (such as those defined in RUSTFLAGS ). Do not try to match on debug_assertions , test , Cargo features like feature="foo" , or values set by build scripts . If using a target spec JSON file, the <triple> value is the filename stem. For example --target foo/bar.json would match [target.bar] . target.<triple>.ar This option is deprecated and unused. target.<triple>.linker Type: string (program path) Default: none Environment: CARGO_TARGET_<triple>_LINKER Specifies the linker which is passed to rustc (via -C linker ) when the <triple> is being compiled for. By default, the linker is not overridden. 
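For example, a common cross-compilation sketch; the triple and linker binary are illustrative and assume the matching GCC cross toolchain is installed:

[target.aarch64-unknown-linux-gnu]
# Use the GCC cross-linker driver instead of the default when building for this triple.
linker = "aarch64-linux-gnu-gcc"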
target.<cfg>.linker This is similar to the target linker , but using a cfg() expression . If both a <triple> and <cfg> linker match, the <triple> will take precedence. It is an error if more than one <cfg> linker matches the current target. target.<triple>.runner Type: string or array of strings ( program path with args ) Default: none Environment: CARGO_TARGET_<triple>_RUNNER If a runner is provided, executables for the target <triple> will be executed by invoking the specified runner with the actual executable passed as an argument. This applies to cargo run , cargo test and cargo bench commands. By default, compiled executables are executed directly. target.<cfg>.runner This is similar to the target runner , but using a cfg() expression . If both a <triple> and <cfg> runner match, the <triple> will take precedence. It is an error if more than one <cfg> runner matches the current target. target.<triple>.rustflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTFLAGS Passes a set of custom flags to the compiler for this <triple> . The value may be an array of strings or a space-separated string. See build.rustflags for more details on the different ways to specify extra flags. target.<cfg>.rustflags This is similar to the target rustflags , but using a cfg() expression . If several <cfg> and <triple> entries match the current target, the flags are joined together. target.<triple>.rustdocflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTDOCFLAGS Passes a set of custom flags to the compiler for this <triple> . The value may be an array of strings or a space-separated string. See build.rustdocflags for more details on the different ways to specify extra flags. target.<triple>.<links> The links sub-table provides a way to override a build script . When specified, the build script for the given links library will not be run, and the given values will be used instead. [target.x86_64-unknown-linux-gnu.foo] rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] The [term] table controls terminal output and interaction. term.quiet Type: boolean Default: false Environment: CARGO_TERM_QUIET Controls whether or not log messages are displayed by Cargo. Specifying the --quiet flag will override and force quiet output. Specifying the --verbose flag will override and disable quiet output. term.verbose Type: boolean Default: false Environment: CARGO_TERM_VERBOSE Controls whether or not extra detailed messages are displayed by Cargo. Specifying the --quiet flag will override and disable verbose output. Specifying the --verbose flag will override and force verbose output. term.color Type: string Default: "auto" Environment: CARGO_TERM_COLOR Controls whether or not colored output is used in the terminal. Possible values: auto (default): Automatically detect if color support is available on the terminal. always : Always display colors. never : Never display colors. Can be overridden with the --color command-line option. term.hyperlinks Type: bool Default: auto-detect Environment: CARGO_TERM_HYPERLINKS Controls whether or not hyperlinks are used in the terminal. term.unicode Type: bool Default: auto-detect Environment: CARGO_TERM_UNICODE Controls whether output can be rendered using non-ASCII unicode characters.
term.progress.when Type: string Default: "auto" Environment: CARGO_TERM_PROGRESS_WHEN Controls whether or not the progress bar is shown in the terminal. Possible values: auto (default): Intelligently guess whether to show the progress bar. always : Always show the progress bar. never : Never show the progress bar. term.progress.width Type: integer Default: none Environment: CARGO_TERM_PROGRESS_WIDTH Sets the width of the progress bar. term.progress.term-integration Type: bool Default: auto-detect Environment: CARGO_TERM_PROGRESS_TERM_INTEGRATION Reports progress to the terminal emulator for display in places like the task bar. | 2026-01-13T09:29:13
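Pulling several of the tables above together, a hedged end-to-end sketch of a project-level .cargo/config.toml; every value is illustrative rather than a recommendation:

# Illustrative project-level .cargo/config.toml combining keys documented above.
[build]
jobs = 4                      # cap the number of parallel compiler processes

[net]
retry = 3                     # retry possibly spurious network errors
git-fetch-with-cli = true     # use the `git` executable for fetches

[term]
color = "always"
progress.when = "auto"
progress.width = 80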
https://doc.rust-lang.org/rust-by-example/compatibility/raw_identifiers.html | Raw identifiers - Rust By Example Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu Rust By Example Raw identifiers Rust, like many programming languages, has the concept of "keywords". These identifiers mean something to the language, and so you cannot use them in places like variable names, function names, and other places. Raw identifiers let you use keywords where they would not normally be allowed. This is particularly useful when Rust introduces new keywords, and a library using an older edition of Rust has a variable or function with the same name as a keyword introduced in a newer edition. For example, consider a crate foo compiled with the 2015 edition of Rust that exports a function named try . This keyword is reserved for a new feature in the 2018 edition, so without raw identifiers, we would have no way to name the function. extern crate foo; fn main() { foo::try(); } You'll get this error: error: expected identifier, found keyword `try` --> src/main.rs:4:4 | 4 | foo::try(); | ^^^ expected identifier, found keyword You can write this with a raw identifier: extern crate foo; fn main() { foo::r#try(); } | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/reference/attributes.html#grammar-MetaSeq | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicate that a type will have more fields/variants added in future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/reference/abi.html#the-no_mangle-attribute | Application binary interface - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [abi] Application binary interface (ABI) [abi .intro] This section documents features that affect the ABI of the compiled output of a crate. See extern functions for information on specifying the ABI for exporting functions. See external blocks for information on specifying the ABI for linking external libraries. [abi .used] The used attribute [abi .used .intro] The used attribute can only be applied to static items . This attribute forces the compiler to keep the variable in the output object file (.o, .rlib, etc. excluding final binaries) even if the variable is not used, or referenced, by any other item in the crate. However, the linker is still free to remove such an item. Below is an example that shows under what conditions the compiler keeps a static item in the output object file. #![allow(unused)] fn main() { // foo.rs // This is kept because of `#[used]`: #[used] static FOO: u32 = 0; // This is removable because it is unused: #[allow(dead_code)] static BAR: u32 = 0; // This is kept because it is publicly reachable: pub static BAZ: u32 = 0; // This is kept because it is referenced by a public, reachable function: static QUUX: u32 = 0; pub fn quux() -> &'static u32 { &QUUX } // This is removable because it is referenced by a private, unused (dead) function: static CORGE: u32 = 0; #[allow(dead_code)] fn corge() -> &'static u32 { &CORGE } } $ rustc -O --emit=obj --crate-type=rlib foo.rs $ nm -C foo.o 0000000000000000 R foo::BAZ 0000000000000000 r foo::FOO 0000000000000000 R foo::QUUX 0000000000000000 T foo::quux [abi .no_mangle] The no_mangle attribute [abi .no_mangle .intro] The no_mangle attribute may be used on any item to disable standard symbol name mangling. The symbol for the item will be the identifier of the item’s name. [abi .no_mangle .publicly-exported] Additionally, the item will be publicly exported from the produced library or object file, similar to the used attribute . [abi .no_mangle .unsafe] This attribute is unsafe as an unmangled symbol may collide with another symbol with the same name (or with a well-known symbol), leading to undefined behavior. #![allow(unused)] fn main() { #[unsafe(no_mangle)] extern "C" fn foo() {} } [abi .no_mangle .edition2024] 2024 Edition differences Before the 2024 edition it is allowed to use the no_mangle attribute without the unsafe qualification. [abi .link_section] The link_section attribute [abi .link_section .intro] The link_section attribute specifies the section of the object file that a function or static ’s content will be placed into. [abi .link_section .syntax] The link_section attribute uses the MetaNameValueStr syntax to specify the section name. #![allow(unused)] fn main() { #[unsafe(no_mangle)] #[unsafe(link_section = ".example_section")] pub static VAR1: u32 = 1; } [abi .link_section .unsafe] This attribute is unsafe as it allows users to place data and code into sections of memory not expecting them, such as mutable data into read-only areas. [abi .link_section .edition2024] 2024 Edition differences Before the 2024 edition it is allowed to use the link_section attribute without the unsafe qualification. 
[abi .export_name] The export_name attribute [abi .export_name .intro] The export_name attribute specifies the name of the symbol that will be exported on a function or static . [abi .export_name .syntax] The export_name attribute uses the MetaNameValueStr syntax to specify the symbol name. #![allow(unused)] fn main() { #[unsafe(export_name = "exported_symbol_name")] pub fn name_in_rust() { } } [abi .export_name .unsafe] This attribute is unsafe as a symbol with a custom name may collide with another symbol with the same name (or with a well-known symbol), leading to undefined behavior. [abi .export_name .edition2024] 2024 Edition differences Before the 2024 edition it is allowed to use the export_name attribute without the unsafe qualification. | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/reference/attributes.html#r-attributes.meta | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the Windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc. Type System non_exhaustive — Indicates that a type will have more fields/variants added in the future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13
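To make the built-in attribute index and the meta item styles above concrete, here is a small, self-contained sketch; the item names (Token, legacy_token, older_token) are invented purely for illustration:

// MetaWord style.
#[must_use]
struct Token(u32);

// MetaNameValueStr style: a string value after `=`.
#[deprecated = "superseded by Token"]
fn legacy_token(v: u32) -> Token { Token(v) }

// MetaListNameValueStr style: `deprecated` also accepts named keys.
#[deprecated(since = "1.2.0", note = "superseded by Token")]
fn older_token(v: u32) -> Token { Token(v) }

// MetaListPaths style, as used by the lint-level attributes.
#[allow(deprecated, unused_variables)]
fn main() {
    let x = 0;
    let t = legacy_token(1);
    let u = older_token(2);
    println!("{} {}", t.0, u.0);
}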
https://doc.rust-lang.org/cargo/commands/cargo-package.html#option-cargo-package---locked | cargo package - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book cargo-package(1) NAME cargo-package — Assemble the local package into a distributable tarball SYNOPSIS cargo package [ options ] DESCRIPTION This command will create a distributable, compressed .crate file with the source code of the package in the current directory. The resulting file will be stored in the target/package directory. This performs the following steps: Load and check the current workspace, performing some basic checks. Path dependencies are not allowed unless they have a version key. Cargo will ignore the path key for dependencies in published packages. dev-dependencies do not have this restriction. Create the compressed .crate file. The original Cargo.toml file is rewritten and normalized. [patch] , [replace] , and [workspace] sections are removed from the manifest. Cargo.lock is always included. When missing, a new lock file will be generated unless the --exclude-lockfile flag is used. cargo-install(1) will use the packaged lock file if the --locked flag is used. A .cargo_vcs_info.json file is included that contains information about the current VCS checkout hash if available, as well as a flag if the worktree is dirty. Symlinks are flattened to their target files. Files and directories are included or excluded based on rules mentioned in the [include] and [exclude] fields . Extract the .crate file and build it to verify it can build. This will rebuild your package from scratch to ensure that it can be built from a pristine state. The --no-verify flag can be used to skip this step. Check that build scripts did not modify any source files. The list of files included can be controlled with the include and exclude fields in the manifest. See the reference for more details about packaging and publishing. .cargo_vcs_info.json format Will generate a .cargo_vcs_info.json in the following format { "git": { "sha1": "aac20b6e7e543e6dd4118b246c77225e3a3a1302", "dirty": true }, "path_in_vcs": "" } dirty indicates that the Git worktree was dirty when the package was built. path_in_vcs will be set to a repo-relative path for packages in subdirectories of the version control repository. The compatibility of this file is maintained under the same policy as the JSON output of cargo-metadata(1) . Note that this file provides a best-effort snapshot of the VCS information. However, the provenance of the package is not verified. There is no guarantee that the source code in the tarball matches the VCS information. OPTIONS Package Options -l --list Print files included in a package without making one. --no-verify Don’t verify the contents by building them. --no-metadata Ignore warnings about a lack of human-usable metadata (such as the description or the license). --allow-dirty Allow working directories with uncommitted VCS changes to be packaged. --exclude-lockfile Don’t include the lock file when packaging. This flag is not for general use. Some tools may expect a lock file to be present (e.g. cargo install --locked ). Consider other options before using this. --index index The URL of the registry index to use. --registry registry Name of the registry to package for; see cargo publish --help for more details about configuration of registry names. 
The packages will not be published to this registry, but if we are packaging multiple inter-dependent crates, lock-files will be generated under the assumption that dependencies will be published to this registry. --message-format fmt Specifies the output message format. Currently, it only works with --list and affects the file listing format. This is unstable and requires -Zunstable-options . Valid output formats: human (default): Display in a file-per-line format. json : Emit machine-readable JSON information about each package. One package per JSON line (Newline delimited JSON). { /* The Package ID Spec of the package. */ "id": "path+file:///home/foo#0.0.0", /* Files of this package */ "files" { /* Relative path in the archive file. */ "Cargo.toml.orig": { /* Where the file is from. - "generate" for file being generated during packaging - "copy" for file being copied from another location. */ "kind": "copy", /* For the "copy" kind, it is an absolute path to the actual file content. For the "generate" kind, it is the original file the generated one is based on. */ "path": "/home/foo/Cargo.toml" }, "Cargo.toml": { "kind": "generate", "path": "/home/foo/Cargo.toml" }, "src/main.rs": { "kind": "copy", "path": "/home/foo/src/main.rs" } } } Package Selection By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace then the workspaces default members are selected, otherwise only the package defined by the manifest will be selected. The default members of a workspace can be set explicitly with the workspace.default-members key in the root manifest. If this is not set, a virtual workspace will include all workspace members (equivalent to passing --workspace ), and a non-virtual workspace will include only the root crate itself. -p spec … --package spec … Package only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. --workspace Package all members in the workspace. --exclude SPEC … Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. Compilation Options --target triple Package for the specified target architecture. Flag may be specified multiple times. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. This may also be specified with the build.target config value . 
Note that specifying this flag makes Cargo run in a different mode where the target artifacts are placed in a separate directory. See the build cache documentation for more details. --target-dir directory Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value . Defaults to target in the root of the workspace. Feature Selection The feature flags allow you to control which features are enabled. When no feature options are given, the default feature is activated for every selected package. See the features documentation for more details. -F features --features features Space or comma separated list of features to activate. Features of workspace members may be enabled with package-name/feature-name syntax. This flag may be specified multiple times, which enables all specified features. --all-features Activate all available features of all selected packages. --no-default-features Do not activate the default feature of the selected packages. Manifest Options --manifest-path path Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory. --locked Asserts that the exact same dependencies and versions are used as when the existing Cargo.lock file was originally generated. Cargo will exit with an error when either of the following scenarios arises: The lock file is missing. Cargo attempted to change the lock file due to a different dependency resolution. It may be used in environments where deterministic builds are desired, such as in CI pipelines. --offline Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible. Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline. May also be specified with the net.offline config value . --frozen Equivalent to specifying both --locked and --offline . --lockfile-path PATH Changes the path of the lockfile from the default ( <workspace_root>/Cargo.lock ) to PATH . PATH must end with Cargo.lock (e.g. --lockfile-path /tmp/temporary-lockfile/Cargo.lock ). Note that providing --lockfile-path will ignore existing lockfile at the default path, and instead will either use the lockfile from PATH , or write a new lockfile into the provided PATH if it doesn’t exist. This flag can be used to run most commands in read-only directories, writing lockfile into the provided PATH . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #14421 ). Miscellaneous Options -j N --jobs N Number of parallel jobs to run. May also be specified with the build.jobs config value . Defaults to the number of logical CPUs. If negative, it sets the maximum number of parallel jobs to the number of logical CPUs plus provided value. If a string default is provided, it sets the value back to defaults. Should not be 0. --keep-going Build as many crates in the dependency graph as possible, rather than aborting the build on the first one that fails to build. 
For example if the current package depends on dependencies fails and works , one of which fails to build, cargo package -j1 may or may not build the one that succeeds (depending on which one of the two builds Cargo picked to run first), whereas cargo package -j1 --keep-going would definitely run both builds, even if the one run first fails. Display Options -v --verbose Use verbose output. May be specified twice for “very verbose” output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value . -q --quiet Do not print cargo log messages. May also be specified with the term.quiet config value . --color when Control when colored output is used. Valid values: auto (default): Automatically detect if color support is available on the terminal. always : Always display colors. never : Never display colors. May also be specified with the term.color config value . Common Options + toolchain If Cargo has been installed with rustup, and the first argument to cargo begins with + , it will be interpreted as a rustup toolchain name (such as +stable or +nightly ). See the rustup documentation for more information about how toolchain overrides work. --config KEY=VALUE or PATH Overrides a Cargo configuration value. The argument should be in TOML syntax of KEY=VALUE , or provided as a path to an extra configuration file. This flag may be specified multiple times. See the command-line overrides section for more information. -C PATH Changes the current working directory before executing any specified operations. This affects things like where cargo looks by default for the project manifest ( Cargo.toml ), as well as the directories searched for discovering .cargo/config.toml , for example. This option must appear before the command name, for example cargo -C path/to/my-project build . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #10098 ). -h --help Prints help information. -Z flag Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details. ENVIRONMENT See the reference for details on environment variables that Cargo reads. EXIT STATUS 0 : Cargo succeeded. 101 : Cargo failed to complete. EXAMPLES Create a compressed .crate file of the current package: cargo package SEE ALSO cargo(1) , cargo-publish(1) | 2026-01-13T09:29:13 |
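The .cargo_vcs_info.json format described above is small enough to read back programmatically, for example when auditing a packaged .crate file. A minimal sketch, assuming the serde and serde_json crates (with serde's derive feature) as dependencies; the struct names are our own, only the field names come from the documented format:

use serde::Deserialize;

// Fields follow the documented `.cargo_vcs_info.json` layout; everything is
// optional because the file is only a best-effort snapshot of VCS state.
#[derive(Debug, Deserialize)]
struct CargoVcsInfo {
    git: Option<GitInfo>,
    path_in_vcs: Option<String>,
}

#[derive(Debug, Deserialize)]
struct GitInfo {
    sha1: String,
    #[serde(default)]
    dirty: bool,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string(".cargo_vcs_info.json")?;
    let info: CargoVcsInfo = serde_json::from_str(&raw)?;
    println!("{info:#?}");
    Ok(())
}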
https://doc.rust-lang.org/cargo/reference/registry-authentication.html#cargotoken | Registry Authentication - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book Registry Authentication Cargo authenticates to registries with credential providers. These credential providers are external executables or built-in providers that Cargo uses to store and retrieve credentials. Using alternative registries with authentication requires a credential provider to be configured to avoid unknowingly storing unencrypted credentials on disk. For historical reasons, public (non-authenticated) registries do not require credential provider configuration, and the cargo:token provider is used if no providers are configured. Cargo also includes platform-specific providers that use the operating system to securely store tokens. The cargo:token provider is also included which stores credentials in unencrypted plain text in the credentials file. Recommended configuration It’s recommended to configure a global credential provider list in $CARGO_HOME/config.toml which defaults to: Windows: %USERPROFILE%\.cargo\config.toml Unix: ~/.cargo/config.toml This recommended configuration uses the operating system provider, with a fallback to cargo:token to look in Cargo’s credentials file or environment variables: # ~/.cargo/config.toml [registry] global-credential-providers = ["cargo:token", "cargo:libsecret", "cargo:macos-keychain", "cargo:wincred"] Note that later entries have higher precedence. See registry.global-credential-providers for more details. Some private registries may also recommend a registry-specific credential-provider. Check your registry’s documentation to see if this is the case. Built-in providers Cargo includes several built-in credential providers. The available built-in providers may change in future Cargo releases (though there are currently no plans to do so). cargo:token Uses Cargo’s credentials file to store tokens unencrypted in plain text. When retrieving tokens, checks the CARGO_REGISTRIES_<NAME>_TOKEN environment variable. If this credential provider is not listed, then the *_TOKEN environment variables will not work. cargo:wincred Uses the Windows Credential Manager to store tokens. The credentials are stored as cargo-registry:<index-url> in the Credential Manager under “Windows Credentials”. cargo:macos-keychain Uses the macOS Keychain to store tokens. The Keychain Access app can be used to view stored tokens. cargo:libsecret Uses libsecret to store tokens. Any password manager with libsecret support can be used to view stored tokens. The following are a few examples (non-exhaustive): GNOME Keyring KDE Wallet Manager (since KDE Frameworks 5.97.0) KeePassXC (since 2.5.0) cargo:token-from-stdout <command> <args> Launch a subprocess that returns a token on stdout. Newlines will be trimmed. The process inherits the user’s stdin and stderr. It should exit 0 on success, and nonzero on error. cargo login and cargo logout are not supported and return an error if used. The following environment variables will be provided to the executed command: CARGO — Path to the cargo binary executing the command. CARGO_REGISTRY_INDEX_URL — The URL of the registry index. CARGO_REGISTRY_NAME_OPT — Optional name of the registry. Should not be used as a lookup key. Arguments will be passed on to the subcommand. 
Credential plugins For credential provider plugins that follow Cargo’s credential provider protocol , the configuration value should be a string with the path to the executable (or the executable name if on the PATH ). For example, to install cargo-credential-1password from crates.io do the following: Install the provider with cargo install cargo-credential-1password In the config, add to (or create) registry.global-credential-providers : [registry] global-credential-providers = ["cargo:token", "cargo-credential-1password --account my.1password.com"] The values in global-credential-providers are split on spaces into path and command-line arguments. To define a global credential provider where the path or arguments contain spaces, use the [credential-alias] table . | 2026-01-13T09:29:13 |
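For the cargo:token-from-stdout provider described above, the launched command only has to print the token on stdout and exit 0. A minimal sketch of such a helper; the MY_REGISTRY_TOKEN variable is a hypothetical secret source of our choosing, not something Cargo provides:

use std::process::exit;

fn main() {
    // Cargo passes the registry index URL to the helper; useful for logging
    // or for choosing between several stored tokens.
    let index = std::env::var("CARGO_REGISTRY_INDEX_URL").unwrap_or_default();
    match std::env::var("MY_REGISTRY_TOKEN") {
        // The token goes to stdout; Cargo trims trailing newlines.
        Ok(token) => println!("{token}"),
        Err(_) => {
            eprintln!("no token available for registry {index}");
            exit(1);
        }
    }
}

Once built, the helper's executable path would be listed in registry.global-credential-providers alongside cargo:token, as described in the configuration section above.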
https://doc.rust-lang.org/reference/macros.html#grammar-DelimTokenTree | Macros - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [macro] Macros [macro .intro] The functionality and syntax of Rust can be extended with custom definitions called macros. They are given names, and invoked through a consistent syntax: some_extension!(...) . There are two ways to define new macros: Macros by Example define new syntax in a higher-level, declarative way. Procedural Macros define function-like macros, custom derives, and custom attributes using functions that operate on input tokens. [macro .invocation] Macro invocation [macro .invocation .syntax] Syntax MacroInvocation → SimplePath ! DelimTokenTree DelimTokenTree → ( TokenTree * ) | [ TokenTree * ] | { TokenTree * } TokenTree → Token except delimiters | DelimTokenTree MacroInvocationSemi → SimplePath ! ( TokenTree * ) ; | SimplePath ! [ TokenTree * ] ; | SimplePath ! { TokenTree * } Show Railroad MacroInvocation SimplePath ! DelimTokenTree DelimTokenTree ( TokenTree ) [ TokenTree ] { TokenTree } TokenTree except delimiters Token DelimTokenTree MacroInvocationSemi SimplePath ! ( TokenTree ) ; SimplePath ! [ TokenTree ] ; SimplePath ! { TokenTree } [macro .invocation .intro] A macro invocation expands a macro at compile time and replaces the invocation with the result of the macro. Macros may be invoked in the following situations: [macro .invocation .expr] Expressions and statements [macro .invocation .pattern] Patterns [macro .invocation .type] Types [macro .invocation .item] Items including associated items [macro .invocation .nested] macro_rules transcribers [macro .invocation .extern] External blocks [macro .invocation .item-statement] When used as an item or a statement, the MacroInvocationSemi form is used where a semicolon is required at the end when not using curly braces. Visibility qualifiers are never allowed before a macro invocation or macro_rules definition. #![allow(unused)] fn main() { // Used as an expression. let x = vec![1,2,3]; // Used as a statement. println!("Hello!"); // Used in a pattern. macro_rules! pat { ($i:ident) => (Some($i)) } if let pat!(x) = Some(1) { assert_eq!(x, 1); } // Used in a type. macro_rules! Tuple { { $A:ty, $B:ty } => { ($A, $B) }; } type N2 = Tuple!(i32, i32); // Used as an item. use std::cell::RefCell; thread_local!(static FOO: RefCell<u32> = RefCell::new(1)); // Used as an associated item. macro_rules! const_maker { ($t:ty, $v:tt) => { const CONST: $t = $v; }; } trait T { const_maker!{i32, 7} } // Macro calls within macros. macro_rules! example { () => { println!("Macro call in a macro!") }; } // Outer macro `example` is expanded, then inner macro `println` is expanded. example!(); } | 2026-01-13T09:29:13 |
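A compact sketch reinforcing the invocation positions listed above: one macro_rules! macro used in expression position and then inside another macro invocation. The macro itself is ours, purely for illustration:

macro_rules! square {
    ($x:expr) => { $x * $x };
}

fn main() {
    // Expression position.
    let nine = square!(3);
    assert_eq!(nine, 9);

    // Statement position for the outer `println!` call (MacroInvocationSemi),
    // with `square!` expanded as a macro call within a macro.
    println!("4 squared is {}", square!(4));
}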
https://doc.rust-lang.org/reference/attributes.html#r-attributes | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the Windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc. Type System non_exhaustive — Indicates that a type will have more fields/variants added in the future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13
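The inner/outer attribute distinction and the unsafe(..) wrapper described above can be seen together in one small sketch. The exported symbol name is invented, and wrapping no_mangle in unsafe(..) follows from its listing among the unsafe attributes:

// Outer attribute: applies to the function that follows it. `no_mangle` is one
// of the attributes that must be wrapped in `unsafe(..)`.
#[unsafe(no_mangle)]
pub extern "C" fn my_c_entry(x: i32) -> i32 {
    // Inner attribute: applies to the enclosing function body.
    #![allow(unused_variables)]
    let unused = x;
    x + 1
}

fn main() {
    assert_eq!(my_c_entry(41), 42);
}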
https://doc.rust-lang.org/cargo/commands/cargo-package.html#option-cargo-package---frozen | cargo package - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book cargo-package(1) NAME cargo-package — Assemble the local package into a distributable tarball SYNOPSIS cargo package [ options ] DESCRIPTION This command will create a distributable, compressed .crate file with the source code of the package in the current directory. The resulting file will be stored in the target/package directory. This performs the following steps: Load and check the current workspace, performing some basic checks. Path dependencies are not allowed unless they have a version key. Cargo will ignore the path key for dependencies in published packages. dev-dependencies do not have this restriction. Create the compressed .crate file. The original Cargo.toml file is rewritten and normalized. [patch] , [replace] , and [workspace] sections are removed from the manifest. Cargo.lock is always included. When missing, a new lock file will be generated unless the --exclude-lockfile flag is used. cargo-install(1) will use the packaged lock file if the --locked flag is used. A .cargo_vcs_info.json file is included that contains information about the current VCS checkout hash if available, as well as a flag if the worktree is dirty. Symlinks are flattened to their target files. Files and directories are included or excluded based on rules mentioned in the [include] and [exclude] fields . Extract the .crate file and build it to verify it can build. This will rebuild your package from scratch to ensure that it can be built from a pristine state. The --no-verify flag can be used to skip this step. Check that build scripts did not modify any source files. The list of files included can be controlled with the include and exclude fields in the manifest. See the reference for more details about packaging and publishing. .cargo_vcs_info.json format Will generate a .cargo_vcs_info.json in the following format { "git": { "sha1": "aac20b6e7e543e6dd4118b246c77225e3a3a1302", "dirty": true }, "path_in_vcs": "" } dirty indicates that the Git worktree was dirty when the package was built. path_in_vcs will be set to a repo-relative path for packages in subdirectories of the version control repository. The compatibility of this file is maintained under the same policy as the JSON output of cargo-metadata(1) . Note that this file provides a best-effort snapshot of the VCS information. However, the provenance of the package is not verified. There is no guarantee that the source code in the tarball matches the VCS information. OPTIONS Package Options -l --list Print files included in a package without making one. --no-verify Don’t verify the contents by building them. --no-metadata Ignore warnings about a lack of human-usable metadata (such as the description or the license). --allow-dirty Allow working directories with uncommitted VCS changes to be packaged. --exclude-lockfile Don’t include the lock file when packaging. This flag is not for general use. Some tools may expect a lock file to be present (e.g. cargo install --locked ). Consider other options before using this. --index index The URL of the registry index to use. --registry registry Name of the registry to package for; see cargo publish --help for more details about configuration of registry names. 
The packages will not be published to this registry, but if we are packaging multiple inter-dependent crates, lock-files will be generated under the assumption that dependencies will be published to this registry. --message-format fmt Specifies the output message format. Currently, it only works with --list and affects the file listing format. This is unstable and requires -Zunstable-options . Valid output formats: human (default): Display in a file-per-line format. json : Emit machine-readable JSON information about each package. One package per JSON line (Newline delimited JSON). { /* The Package ID Spec of the package. */ "id": "path+file:///home/foo#0.0.0", /* Files of this package */ "files" { /* Relative path in the archive file. */ "Cargo.toml.orig": { /* Where the file is from. - "generate" for file being generated during packaging - "copy" for file being copied from another location. */ "kind": "copy", /* For the "copy" kind, it is an absolute path to the actual file content. For the "generate" kind, it is the original file the generated one is based on. */ "path": "/home/foo/Cargo.toml" }, "Cargo.toml": { "kind": "generate", "path": "/home/foo/Cargo.toml" }, "src/main.rs": { "kind": "copy", "path": "/home/foo/src/main.rs" } } } Package Selection By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace then the workspaces default members are selected, otherwise only the package defined by the manifest will be selected. The default members of a workspace can be set explicitly with the workspace.default-members key in the root manifest. If this is not set, a virtual workspace will include all workspace members (equivalent to passing --workspace ), and a non-virtual workspace will include only the root crate itself. -p spec … --package spec … Package only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. --workspace Package all members in the workspace. --exclude SPEC … Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. Compilation Options --target triple Package for the specified target architecture. Flag may be specified multiple times. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. This may also be specified with the build.target config value . 
Note that specifying this flag makes Cargo run in a different mode where the target artifacts are placed in a separate directory. See the build cache documentation for more details. --target-dir directory Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value . Defaults to target in the root of the workspace. Feature Selection The feature flags allow you to control which features are enabled. When no feature options are given, the default feature is activated for every selected package. See the features documentation for more details. -F features --features features Space or comma separated list of features to activate. Features of workspace members may be enabled with package-name/feature-name syntax. This flag may be specified multiple times, which enables all specified features. --all-features Activate all available features of all selected packages. --no-default-features Do not activate the default feature of the selected packages. Manifest Options --manifest-path path Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory. --locked Asserts that the exact same dependencies and versions are used as when the existing Cargo.lock file was originally generated. Cargo will exit with an error when either of the following scenarios arises: The lock file is missing. Cargo attempted to change the lock file due to a different dependency resolution. It may be used in environments where deterministic builds are desired, such as in CI pipelines. --offline Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible. Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline. May also be specified with the net.offline config value . --frozen Equivalent to specifying both --locked and --offline . --lockfile-path PATH Changes the path of the lockfile from the default ( <workspace_root>/Cargo.lock ) to PATH . PATH must end with Cargo.lock (e.g. --lockfile-path /tmp/temporary-lockfile/Cargo.lock ). Note that providing --lockfile-path will ignore existing lockfile at the default path, and instead will either use the lockfile from PATH , or write a new lockfile into the provided PATH if it doesn’t exist. This flag can be used to run most commands in read-only directories, writing lockfile into the provided PATH . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #14421 ). Miscellaneous Options -j N --jobs N Number of parallel jobs to run. May also be specified with the build.jobs config value . Defaults to the number of logical CPUs. If negative, it sets the maximum number of parallel jobs to the number of logical CPUs plus provided value. If a string default is provided, it sets the value back to defaults. Should not be 0. --keep-going Build as many crates in the dependency graph as possible, rather than aborting the build on the first one that fails to build. 
For example if the current package depends on dependencies fails and works , one of which fails to build, cargo package -j1 may or may not build the one that succeeds (depending on which one of the two builds Cargo picked to run first), whereas cargo package -j1 --keep-going would definitely run both builds, even if the one run first fails. Display Options -v --verbose Use verbose output. May be specified twice for “very verbose” output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value . -q --quiet Do not print cargo log messages. May also be specified with the term.quiet config value . --color when Control when colored output is used. Valid values: auto (default): Automatically detect if color support is available on the terminal. always : Always display colors. never : Never display colors. May also be specified with the term.color config value . Common Options + toolchain If Cargo has been installed with rustup, and the first argument to cargo begins with + , it will be interpreted as a rustup toolchain name (such as +stable or +nightly ). See the rustup documentation for more information about how toolchain overrides work. --config KEY=VALUE or PATH Overrides a Cargo configuration value. The argument should be in TOML syntax of KEY=VALUE , or provided as a path to an extra configuration file. This flag may be specified multiple times. See the command-line overrides section for more information. -C PATH Changes the current working directory before executing any specified operations. This affects things like where cargo looks by default for the project manifest ( Cargo.toml ), as well as the directories searched for discovering .cargo/config.toml , for example. This option must appear before the command name, for example cargo -C path/to/my-project build . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #10098 ). -h --help Prints help information. -Z flag Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details. ENVIRONMENT See the reference for details on environment variables that Cargo reads. EXIT STATUS 0 : Cargo succeeded. 101 : Cargo failed to complete. EXAMPLES Create a compressed .crate file of the current package: cargo package SEE ALSO cargo(1) , cargo-publish(1) | 2026-01-13T09:29:13 |
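A few concrete invocations may help tie the options above together. This is a hedged sketch rather than output from the page: the package spec my-crate* and the exclude pattern internal-* are purely illustrative, and only flags documented above are used.

# Print the files that would be packaged, without creating the .crate file
cargo package --list

# Skip the verification rebuild of the extracted .crate file
cargo package --no-verify

# Package a single workspace member; the spec is quoted so the shell does not expand the glob
cargo package -p 'my-crate*'

# Package every workspace member except those matching a pattern
cargo package --workspace --exclude 'internal-*'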
https://doc.rust-lang.org/reference/attributes.html#r-attributes.kind | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc . Type System non_exhaustive — Indicate that a type will have more fields/variants added in future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13 |
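As a supplement to the index above, the following is a minimal, hedged sketch that exercises several attribute forms described on this page. The type, constant, and function names are illustrative only, and the unsafe(no_mangle) line assumes a toolchain recent enough to accept unsafe attribute syntax.

// Outer attributes on an item, using several meta-item forms from the index above.
#[doc = "A point in two-dimensional space."]   // MetaNameValueStr form
#[derive(Debug, Clone)]                        // derive: automatic trait implementations
#[allow(dead_code)]                            // lint attribute, MetaListPaths form
#[repr(C)]                                     // repr: controls type layout
pub struct Point { pub x: i32, pub y: i32 }

// Tool attribute: the first path segment names the tool.
#[rustfmt::skip]
pub const ORIGIN: Point = Point { x: 0, y: 0 };

// Unsafe attribute: no_mangle is wrapped in unsafe(..) to assert its obligations are met.
#[unsafe(no_mangle)]
pub extern "C" fn point_x(p: &Point) -> i32 { p.x }

fn main() {
    // Inner attribute: applies to the function it is declared within.
    #![allow(unused_variables)]
    let copy = ORIGIN.clone();
    println!("{:?}", ORIGIN);
}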
https://doc.rust-lang.org/reference/items.html | Items - The Rust Reference [items] Items [items .syntax] Syntax Item → OuterAttribute * ( VisItem | MacroItem ) VisItem → Visibility ? ( Module | ExternCrate | UseDeclaration | Function | TypeAlias | Struct | Enumeration | Union | ConstantItem | StaticItem | Trait | Implementation | ExternBlock ) MacroItem → MacroInvocationSemi | MacroRulesDefinition [items .intro] An item is a component of a crate. Items are organized within a crate by a nested set of modules . Every crate has a single “outermost” anonymous module; all further items within the crate have paths within the module tree of the crate. [items .static-def] Items are entirely determined at compile-time, generally remain fixed during execution, and may reside in read-only memory. [items .kinds] There are several kinds of items: modules extern crate declarations use declarations function definitions type definitions struct definitions enumeration definitions union definitions constant items static items trait definitions implementations extern blocks [items .locations] Items may be declared in the root of the crate , a module , or a block expression . [items .associated-locations] A subset of items, called associated items , may be declared in traits and implementations . [items .extern-locations] A subset of items, called external items, may be declared in extern blocks . [items .decl-order] Items may be defined in any order, with the exception of macro_rules which has its own scoping behavior. [items .name-resolution] Name resolution of item names allows items to be defined before or after where the item is referred to in the module or block. See item scopes for information on the scoping rules of items. | 2026-01-13T09:29:13
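To make the locations listed above concrete, here is a small, hedged sketch showing items declared at the crate root, inside a module, and inside a block expression; the module and function names are illustrative, not taken from the page.

// Item at the crate root: a module, which itself contains further items.
mod geometry {
    // Items may be defined in any order: `area` calls `square`, which is defined later.
    pub fn area(side: u32) -> u32 {
        square(side)
    }

    fn square(n: u32) -> u32 {
        n * n
    }

    // A constant item inside the module.
    pub const UNIT: u32 = 1;
}

fn main() {
    // Items may also be declared inside a block expression; this struct is local to `main`.
    struct Label(&'static str);

    let label = Label("area");
    println!("{} = {}", label.0, geometry::area(2 * geometry::UNIT));
}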
https://doc.rust-lang.org/reference/attributes.html#railroad-MetaItem | Attributes - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [attributes] Attributes [attributes .syntax] Syntax InnerAttribute → # ! [ Attr ] OuterAttribute → # [ Attr ] Attr → SimplePath AttrInput ? | unsafe ( SimplePath AttrInput ? ) AttrInput → DelimTokenTree | = Expression Show Railroad InnerAttribute # ! [ Attr ] OuterAttribute # [ Attr ] Attr SimplePath AttrInput unsafe ( SimplePath AttrInput ) AttrInput DelimTokenTree = Expression [attributes .intro] An attribute is a general, free-form metadatum that is interpreted according to name, convention, language, and compiler version. Attributes are modeled on Attributes in ECMA-335 , with the syntax coming from ECMA-334 (C#). [attributes .inner] Inner attributes , written with a bang ( ! ) after the hash ( # ), apply to the item that the attribute is declared within. Outer attributes , written without the bang after the hash, apply to the thing that follows the attribute. [attributes .input] The attribute consists of a path to the attribute, followed by an optional delimited token tree whose interpretation is defined by the attribute. Attributes other than macro attributes also allow the input to be an equals sign ( = ) followed by an expression. See the meta item syntax below for more details. [attributes .safety] An attribute may be unsafe to apply. To avoid undefined behavior when using these attributes, certain obligations that cannot be checked by the compiler must be met. To assert these have been, the attribute is wrapped in unsafe(..) , e.g. #[unsafe(no_mangle)] . The following attributes are unsafe: export_name link_section naked no_mangle [attributes .kind] Attributes can be classified into the following kinds: Built-in attributes Proc macro attributes Derive macro helper attributes Tool attributes [attributes .allowed-position] Attributes may be applied to many things in the language: All item declarations accept outer attributes while external blocks , functions , implementations , and modules accept inner attributes. Most statements accept outer attributes (see Expression Attributes for limitations on expression statements). Block expressions accept outer and inner attributes, but only when they are the outer expression of an expression statement or the final expression of another block expression. Enum variants and struct and union fields accept outer attributes. Match expression arms accept outer attributes. Generic lifetime or type parameter accept outer attributes. Expressions accept outer attributes in limited situations, see Expression Attributes for details. Function , closure and function pointer parameters accept outer attributes. This includes attributes on variadic parameters denoted with ... in function pointers and external blocks . Some examples of attributes: #![allow(unused)] fn main() { // General metadata applied to the enclosing module or crate. #![crate_type = "lib"] // A function marked as a unit test #[test] fn test_foo() { /* ... */ } // A conditionally-compiled module #[cfg(target_os = "linux")] mod bar { /* ... */ } // A lint attribute used to suppress a warning/error #[allow(non_camel_case_types)] type int8_t = i8; // Inner attribute applies to the entire function. 
fn some_unused_variables() { #![allow(unused_variables)] let x = (); let y = (); let z = (); } } [attributes .meta] Meta item attribute syntax [attributes .meta .intro] A “meta item” is the syntax used for the Attr rule by most built-in attributes . It has the following grammar: [attributes .meta .syntax] Syntax MetaItem → SimplePath | SimplePath = Expression | SimplePath ( MetaSeq ? ) MetaSeq → MetaItemInner ( , MetaItemInner ) * , ? MetaItemInner → MetaItem | Expression Show Railroad MetaItem SimplePath SimplePath = Expression SimplePath ( MetaSeq ) MetaSeq MetaItemInner , MetaItemInner , MetaItemInner MetaItem Expression [attributes .meta .literal-expr] Expressions in meta items must macro-expand to literal expressions, which must not include integer or float type suffixes. Expressions which are not literal expressions will be syntactically accepted (and can be passed to proc-macros), but will be rejected after parsing. [attributes .meta .order] Note that if the attribute appears within another macro, it will be expanded after that outer macro. For example, the following code will expand the Serialize proc-macro first, which must preserve the include_str! call in order for it to be expanded: #[derive(Serialize)] struct Foo { #[doc = include_str!("x.md")] x: u32 } [attributes .meta .order-macro] Additionally, macros in attributes will be expanded only after all other attributes applied to the item: #[macro_attr1] // expanded first #[doc = mac!()] // `mac!` is expanded fourth. #[macro_attr2] // expanded second #[derive(MacroDerive1, MacroDerive2)] // expanded third fn foo() {} [attributes .meta .builtin] Various built-in attributes use different subsets of the meta item syntax to specify their inputs. The following grammar rules show some commonly used forms: [attributes .meta .builtin .syntax] Syntax MetaWord → IDENTIFIER MetaNameValueStr → IDENTIFIER = ( STRING_LITERAL | RAW_STRING_LITERAL ) MetaListPaths → IDENTIFIER ( ( SimplePath ( , SimplePath ) * , ? ) ? ) MetaListIdents → IDENTIFIER ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) MetaListNameValueStr → IDENTIFIER ( ( MetaNameValueStr ( , MetaNameValueStr ) * , ? ) ? ) Show Railroad MetaWord IDENTIFIER MetaNameValueStr IDENTIFIER = STRING_LITERAL RAW_STRING_LITERAL MetaListPaths IDENTIFIER ( SimplePath , SimplePath , ) MetaListIdents IDENTIFIER ( IDENTIFIER , IDENTIFIER , ) MetaListNameValueStr IDENTIFIER ( MetaNameValueStr , MetaNameValueStr , ) Some examples of meta items are: Style Example MetaWord no_std MetaNameValueStr doc = "example" MetaListPaths allow(unused, clippy::inline_always) MetaListIdents macro_use(foo, bar) MetaListNameValueStr link(name = "CoreFoundation", kind = "framework") [attributes .activity] Active and inert attributes [attributes .activity .intro] An attribute is either active or inert. During attribute processing, active attributes remove themselves from the thing they are on while inert attributes stay on. The cfg and cfg_attr attributes are active. Attribute macros are active. All other attributes are inert. [attributes .tool] Tool attributes [attributes .tool .intro] The compiler may allow attributes for external tools where each tool resides in its own module in the tool prelude . The first segment of the attribute path is the name of the tool, with one or more additional segments whose interpretation is up to the tool. [attributes .tool .ignored] When a tool is not in use, the tool’s attributes are accepted without a warning. 
When the tool is in use, the tool is responsible for processing and interpretation of its attributes. [attributes .tool .prelude] Tool attributes are not available if the no_implicit_prelude attribute is used. #![allow(unused)] fn main() { // Tells the rustfmt tool to not format the following element. #[rustfmt::skip] struct S { } // Controls the "cyclomatic complexity" threshold for the clippy tool. #[clippy::cyclomatic_complexity = "100"] pub fn f() {} } Note rustc currently recognizes the tools “clippy”, “rustfmt”, “diagnostic”, “miri” and “rust_analyzer”. [attributes .builtin] Built-in attributes index The following is an index of all built-in attributes. Conditional compilation cfg — Controls conditional compilation. cfg_attr — Conditionally includes attributes. Testing test — Marks a function as a test. ignore — Disables a test function. should_panic — Indicates a test should generate a panic. Derive derive — Automatic trait implementations. automatically_derived — Marker for implementations created by derive . Macros macro_export — Exports a macro_rules macro for cross-crate usage. macro_use — Expands macro visibility, or imports macros from other crates. proc_macro — Defines a function-like macro. proc_macro_derive — Defines a derive macro. proc_macro_attribute — Defines an attribute macro. Diagnostics allow , expect , warn , deny , forbid — Alters the default lint level. deprecated — Generates deprecation notices. must_use — Generates a lint for unused values. diagnostic::on_unimplemented — Hints the compiler to emit a certain error message if a trait is not implemented. diagnostic::do_not_recommend — Hints the compiler to not show a certain trait impl in error messages. ABI, linking, symbols, and FFI link — Specifies a native library to link with an extern block. link_name — Specifies the name of the symbol for functions or statics in an extern block. link_ordinal — Specifies the ordinal of the symbol for functions or statics in an extern block. no_link — Prevents linking an extern crate. repr — Controls type layout. crate_type — Specifies the type of crate (library, executable, etc.). no_main — Disables emitting the main symbol. export_name — Specifies the exported symbol name for a function or static. link_section — Specifies the section of an object file to use for a function or static. no_mangle — Disables symbol name encoding. used — Forces the compiler to keep a static item in the output object file. crate_name — Specifies the crate name. Code generation inline — Hint to inline code. cold — Hint that a function is unlikely to be called. naked — Prevent the compiler from emitting a function prologue and epilogue. no_builtins — Disables use of certain built-in functions. target_feature — Configure platform-specific code generation. track_caller — Pass the parent call location to std::panic::Location::caller() . instruction_set — Specify the instruction set used to generate a functions code Documentation doc — Specifies documentation. See The Rustdoc Book for more information. Doc comments are transformed into doc attributes. Preludes no_std — Removes std from the prelude. no_implicit_prelude — Disables prelude lookups within a module. Modules path — Specifies the filename for a module. Limits recursion_limit — Sets the maximum recursion limit for certain compile-time operations. type_length_limit — Sets the maximum size of a polymorphic type. Runtime panic_handler — Sets the function to handle panics. global_allocator — Sets the global memory allocator. 
windows_subsystem — Specifies the Windows subsystem to link with. Features feature — Used to enable unstable or experimental compiler features. See The Unstable Book for features implemented in rustc. Type System non_exhaustive — Indicates that a type will have more fields/variants added in the future. Debugger debugger_visualizer — Embeds a file that specifies debugger output for a type. collapse_debuginfo — Controls how macro invocations are encoded in debuginfo. | 2026-01-13T09:29:13
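A minimal illustrative sketch (not part of the reference text above) pulling together several of the attribute forms it describes: an inner attribute, name/value and list meta items, an unsafe-wrapped attribute, and a tool attribute. The item names Reading and reading_value are invented for the example, and the unsafe(..) wrapper assumes a toolchain recent enough to accept that syntax.

#![allow(dead_code)]            // inner attribute: applies to the enclosing crate

#[doc = "A sensor reading."]    // name/value meta item (MetaNameValueStr)
#[derive(Debug, Clone)]         // list meta item requesting derived trait impls
#[repr(C)]                      // list meta item controlling type layout
pub struct Reading {
    pub value: f64,
}

// no_mangle is one of the unsafe attributes, so it is wrapped in unsafe(..):
// the author asserts that no other exported symbol uses this name.
#[unsafe(no_mangle)]
pub extern "C" fn reading_value(r: &Reading) -> f64 {
    r.value
}

// Tool attribute: the first path segment names the tool (rustfmt here).
#[rustfmt::skip]
fn main() { println!("{}", reading_value(&Reading { value: 1.0 })); }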
https://doc.rust-lang.org/edition-guide/editions/creating-a-new-project.html | Creating a new project - The Rust Edition Guide Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Edition Guide Creating a new project A new project created with Cargo is configured to use the latest edition by default: $ cargo new foo Creating binary (application) `foo` package note: see more `Cargo.toml` keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html $ cat foo/Cargo.toml [package] name = "foo" version = "0.1.0" edition = "2024" [dependencies] That edition = "2024" setting configures your package to be built using the Rust 2024 edition. No further configuration needed! You can use the --edition <YEAR> option of cargo new to create the project using some specific edition. For example, creating a new project to use the Rust 2018 edition could be done like this: $ cargo new --edition 2018 foo Creating binary (application) `foo` package note: see more `Cargo.toml` keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html $ cat foo/Cargo.toml [package] name = "foo" version = "0.1.0" edition = "2018" [dependencies] Don't worry about accidentally using an invalid year for the edition; the cargo new invocation will not accept an invalid edition year value: $ cargo new --edition 2019 foo error: invalid value '2019' for '--edition <YEAR>' [possible values: 2015, 2018, 2021, 2024] tip: a similar value exists: '2021' For more information, try '--help'. You can change the value of the edition key by simply editing the Cargo.toml file. For example, to cause your package to be built using the Rust 2015 edition, you would set the key as in the following example: [package] name = "foo" version = "0.1.0" edition = "2015" [dependencies] | 2026-01-13T09:29:13 |
https://doc.rust-lang.org/edition-guide/introduction.html | Introduction - The Rust Edition Guide The Rust Edition Guide Introduction Welcome to The Rust Edition Guide! "Editions" are Rust's way of introducing changes into the language that would not otherwise be backwards compatible. In this guide, we'll discuss: what editions are, which changes are contained in each edition, and how to migrate your code from one edition to another. | 2026-01-13T09:29:13
https://doc.rust-lang.org/reference/procedural-macros.html#attribute-macros | Procedural macros - The Rust Reference Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Rust Reference [macro .proc] Procedural macros [macro .proc .intro] Procedural macros allow creating syntax extensions as execution of a function. Procedural macros come in one of three flavors: Function-like macros - custom!(...) Derive macros - #[derive(CustomDerive)] Attribute macros - #[CustomAttribute] Procedural macros allow you to run code at compile time that operates over Rust syntax, both consuming and producing Rust syntax. You can sort of think of procedural macros as functions from an AST to another AST. [macro .proc .def] Procedural macros must be defined in the root of a crate with the crate type of proc-macro . The macros may not be used from the crate where they are defined, and can only be used when imported in another crate. Note When using Cargo, Procedural macro crates are defined with the proc-macro key in your manifest: [lib] proc-macro = true [macro .proc .result] As functions, they must either return syntax, panic, or loop endlessly. Returned syntax either replaces or adds the syntax depending on the kind of procedural macro. Panics are caught by the compiler and are turned into a compiler error. Endless loops are not caught by the compiler which hangs the compiler. Procedural macros run during compilation, and thus have the same resources that the compiler has. For example, standard input, error, and output are the same that the compiler has access to. Similarly, file access is the same. Because of this, procedural macros have the same security concerns that Cargo’s build scripts have. [macro .proc .error] Procedural macros have two ways of reporting errors. The first is to panic. The second is to emit a compile_error macro invocation. [macro .proc .proc_macro] The proc_macro crate [macro .proc .proc_macro .intro] Procedural macro crates almost always will link to the compiler-provided proc_macro crate . The proc_macro crate provides types required for writing procedural macros and facilities to make it easier. [macro .proc .proc_macro .token-stream] This crate primarily contains a TokenStream type. Procedural macros operate over token streams instead of AST nodes, which is a far more stable interface over time for both the compiler and for procedural macros to target. A token stream is roughly equivalent to Vec<TokenTree> where a TokenTree can roughly be thought of as lexical token. For example foo is an Ident token, . is a Punct token, and 1.2 is a Literal token. The TokenStream type, unlike Vec<TokenTree> , is cheap to clone. [macro .proc .proc_macro .span] All tokens have an associated Span . A Span is an opaque value that cannot be modified but can be manufactured. Span s represent an extent of source code within a program and are primarily used for error reporting. While you cannot modify a Span itself, you can always change the Span associated with any token, such as through getting a Span from another token. [macro .proc .hygiene] Procedural macro hygiene Procedural macros are unhygienic . This means they behave as if the output token stream was simply written inline to the code it’s next to. This means that it’s affected by external items and also affects external imports. 
Macro authors need to be careful to ensure their macros work in as many contexts as possible given this limitation. This often includes using absolute paths to items in libraries (for example, ::std::option::Option instead of Option ) or by ensuring that generated functions have names that are unlikely to clash with other functions (like __internal_foo instead of foo ). [macro .proc .function] Function-like procedural macros [macro .proc .function .intro] Function-like procedural macros are procedural macros that are invoked using the macro invocation operator ( ! ). [macro .proc .function .def] These macros are defined by a public function with the proc_macro attribute and a signature of (TokenStream) -> TokenStream . The input TokenStream is what is inside the delimiters of the macro invocation and the output TokenStream replaces the entire macro invocation. [macro .proc .function .namespace] The proc_macro attribute defines the macro in the macro namespace in the root of the crate. For example, the following macro definition ignores its input and outputs a function answer into its scope. #![crate_type = "proc-macro"] extern crate proc_macro; use proc_macro::TokenStream; #[proc_macro] pub fn make_answer(_item: TokenStream) -> TokenStream { "fn answer() -> u32 { 42 }".parse().unwrap() } And then we use it in a binary crate to print “42” to standard output. extern crate proc_macro_examples; use proc_macro_examples::make_answer; make_answer!(); fn main() { println!("{}", answer()); } [macro .proc .function .invocation] Function-like procedural macros may be invoked in any macro invocation position, which includes statements , expressions , patterns , type expressions , item positions, including items in extern blocks , inherent and trait implementations , and trait definitions . [macro .proc .derive] The proc_macro_derive attribute [macro .proc .derive .intro] Applying the proc_macro_derive attribute to a function defines a derive macro that can be invoked by the derive attribute . These macros are given the token stream of a struct , enum , or union definition and can emit new items after it. They can also declare and use derive macro helper attributes . Example This derive macro ignores its input and appends tokens that define a function. #![crate_type = "proc-macro"] extern crate proc_macro; use proc_macro::TokenStream; #[proc_macro_derive(AnswerFn)] pub fn derive_answer_fn(_item: TokenStream) -> TokenStream { "fn answer() -> u32 { 42 }".parse().unwrap() } To use it, we might write: extern crate proc_macro_examples; use proc_macro_examples::AnswerFn; #[derive(AnswerFn)] struct Struct; fn main() { assert_eq!(42, answer()); } [macro .proc .derive .syntax] The syntax for the proc_macro_derive attribute is: Syntax ProcMacroDeriveAttribute → proc_macro_derive ( DeriveMacroName ( , DeriveMacroAttributes ) ? , ? ) DeriveMacroName → IDENTIFIER DeriveMacroAttributes → attributes ( ( IDENTIFIER ( , IDENTIFIER ) * , ? ) ? ) Show Railroad ProcMacroDeriveAttribute proc_macro_derive ( DeriveMacroName , DeriveMacroAttributes , ) DeriveMacroName IDENTIFIER DeriveMacroAttributes attributes ( IDENTIFIER , IDENTIFIER , ) The name of the derive macro is given by DeriveMacroName . The optional attributes argument is described in macro.proc.derive.attributes . 
[macro .proc .derive .allowed-positions] The proc_macro_derive attribute may only be applied to a pub function with the Rust ABI defined in the root of the crate with a type of fn(TokenStream) -> TokenStream where TokenStream comes from the proc_macro crate . The function may be const and may use extern to explicitly specify the Rust ABI, but it may not use any other qualifiers (e.g. it may not be async or unsafe ). [macro .proc .derive .duplicates] The proc_macro_derive attribute may be used only once on a function. [macro .proc .derive .namespace] The proc_macro_derive attribute publicly defines the derive macro in the macro namespace in the root of the crate. [macro .proc .derive .output] The input TokenStream is the token stream of the item to which the derive attribute is applied. The output TokenStream must be a (possibly empty) set of items. These items are appended following the input item within the same module or block . [macro .proc .derive .attributes] Derive macro helper attributes [macro .proc .derive .attributes .intro] Derive macros can declare derive macro helper attributes to be used within the scope of the item to which the derive macro is applied. These attributes are inert . While their purpose is to be used by the macro that declared them, they can be seen by any macro. [macro .proc .derive .attributes .decl] A helper attribute for a derive macro is declared by adding its identifier to the attributes list in the proc_macro_derive attribute. Example This declares a helper attribute and then ignores it. #![crate_type="proc-macro"] extern crate proc_macro; use proc_macro::TokenStream; #[proc_macro_derive(WithHelperAttr, attributes(helper))] pub fn derive_with_helper_attr(_item: TokenStream) -> TokenStream { TokenStream::new() } To use it, we might write: #[derive(WithHelperAttr)] struct Struct { #[helper] field: (), } [macro .proc .attribute] Attribute macros [macro .proc .attribute .intro] Attribute macros define new outer attributes which can be attached to items , including items in extern blocks , inherent and trait implementations , and trait definitions . [macro .proc .attribute .def] Attribute macros are defined by a public function with the proc_macro_attribute attribute that has a signature of (TokenStream, TokenStream) -> TokenStream . The first TokenStream is the delimited token tree following the attribute’s name, not including the outer delimiters. If the attribute is written as a bare attribute name, the attribute TokenStream is empty. The second TokenStream is the rest of the item including other attributes on the item . The returned TokenStream replaces the item with an arbitrary number of items . [macro .proc .attribute .namespace] The proc_macro_attribute attribute defines the attribute in the macro namespace in the root of the crate. For example, this attribute macro takes the input stream and returns it as is, effectively being the no-op of attributes. #![crate_type = "proc-macro"] extern crate proc_macro; use proc_macro::TokenStream; #[proc_macro_attribute] pub fn return_as_is(_attr: TokenStream, item: TokenStream) -> TokenStream { item } This following example shows the stringified TokenStream s that the attribute macros see. The output will show in the output of the compiler. The output is shown in the comments after the function prefixed with “out:”. 
// my-macro/src/lib.rs extern crate proc_macro; use proc_macro::TokenStream; #[proc_macro_attribute] pub fn show_streams(attr: TokenStream, item: TokenStream) -> TokenStream { println!("attr: \"{attr}\""); println!("item: \"{item}\""); item } // src/lib.rs extern crate my_macro; use my_macro::show_streams; // Example: Basic function #[show_streams] fn invoke1() {} // out: attr: "" // out: item: "fn invoke1() {}" // Example: Attribute with input #[show_streams(bar)] fn invoke2() {} // out: attr: "bar" // out: item: "fn invoke2() {}" // Example: Multiple tokens in the input #[show_streams(multiple => tokens)] fn invoke3() {} // out: attr: "multiple => tokens" // out: item: "fn invoke3() {}" // Example: #[show_streams { delimiters }] fn invoke4() {} // out: attr: "delimiters" // out: item: "fn invoke4() {}" [macro .proc .token] Declarative macro tokens and procedural macro tokens [macro .proc .token .intro] Declarative macro_rules macros and procedural macros use similar, but different definitions for tokens (or rather TokenTree s .) [macro .proc .token .macro_rules] Token trees in macro_rules (corresponding to tt matchers) are defined as Delimited groups ( (...) , {...} , etc) All operators supported by the language, both single-character and multi-character ones ( + , += ). Note that this set doesn’t include the single quote ' . Literals ( "string" , 1 , etc) Note that negation (e.g. -1 ) is never a part of such literal tokens, but a separate operator token. Identifiers, including keywords ( ident , r#ident , fn ) Lifetimes ( 'ident ) Metavariable substitutions in macro_rules (e.g. $my_expr in macro_rules! mac { ($my_expr: expr) => { $my_expr } } after the mac ’s expansion, which will be considered a single token tree regardless of the passed expression) [macro .proc .token .tree] Token trees in procedural macros are defined as Delimited groups ( (...) , {...} , etc) All punctuation characters used in operators supported by the language ( + , but not += ), and also the single quote ' character (typically used in lifetimes, see below for lifetime splitting and joining behavior) Literals ( "string" , 1 , etc) Negation (e.g. -1 ) is supported as a part of integer and floating point literals. Identifiers, including keywords ( ident , r#ident , fn ) [macro .proc .token .conversion .intro] Mismatches between these two definitions are accounted for when token streams are passed to and from procedural macros. Note that the conversions below may happen lazily, so they might not happen if the tokens are not actually inspected. [macro .proc .token .conversion .to-proc_macro] When passed to a proc-macro All multi-character operators are broken into single characters. Lifetimes are broken into a ' character and an identifier. The keyword metavariable $crate is passed as a single identifier. All other metavariable substitutions are represented as their underlying token streams. Such token streams may be wrapped into delimited groups ( Group ) with implicit delimiters ( Delimiter::None ) when it’s necessary for preserving parsing priorities. tt and ident substitutions are never wrapped into such groups and always represented as their underlying token trees. [macro .proc .token .conversion .from-proc_macro] When emitted from a proc macro Punctuation characters are glued into multi-character operators when applicable. Single quotes ' joined with identifiers are glued into lifetimes. 
Negative literals are converted into two tokens (the - and the literal) possibly wrapped into a delimited group ( Group ) with implicit delimiters ( Delimiter::None ) when it’s necessary for preserving parsing priorities. [macro .proc .token .doc-comment] Note that neither declarative nor procedural macros support doc comment tokens (e.g. /// Doc ), so they are always converted to token streams representing their equivalent #[doc = r"str"] attributes when passed to macros. | 2026-01-13T09:29:13 |
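To make the hygiene guidance above concrete, here is a small hypothetical function-like procedural macro (not taken from the reference) that follows the absolute-path advice: because expansions are unhygienic, it spells out ::std::option::Option so a locally defined Option or Some at the call site cannot capture the generated code. The crate and macro names are invented for the example.

// lib.rs of a proc-macro crate (its manifest sets `proc-macro = true`).
#![crate_type = "proc-macro"]
extern crate proc_macro;
use proc_macro::TokenStream;

// Wraps the given expression in Some(..), using an absolute path
// so the expansion is unaffected by the caller's imports or shadowing.
#[proc_macro]
pub fn wrap_in_some(input: TokenStream) -> TokenStream {
    format!("::std::option::Option::Some({input})").parse().unwrap()
}

// In a consuming crate (names hypothetical):
//     use my_macros::wrap_in_some;
//     let x = wrap_in_some!(41 + 1);  // expands to ::std::option::Option::Some(41 + 1)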
https://doc.rust-lang.org/cargo/commands/cargo-package.html#option-cargo-package-+toolchain | cargo package - The Cargo Book Keyboard shortcuts Press ← or → to navigate between chapters Press S or / to search in the book Press ? to show this help Press Esc to hide this help Auto Light Rust Coal Navy Ayu The Cargo Book cargo-package(1) NAME cargo-package — Assemble the local package into a distributable tarball SYNOPSIS cargo package [ options ] DESCRIPTION This command will create a distributable, compressed .crate file with the source code of the package in the current directory. The resulting file will be stored in the target/package directory. This performs the following steps: Load and check the current workspace, performing some basic checks. Path dependencies are not allowed unless they have a version key. Cargo will ignore the path key for dependencies in published packages. dev-dependencies do not have this restriction. Create the compressed .crate file. The original Cargo.toml file is rewritten and normalized. [patch] , [replace] , and [workspace] sections are removed from the manifest. Cargo.lock is always included. When missing, a new lock file will be generated unless the --exclude-lockfile flag is used. cargo-install(1) will use the packaged lock file if the --locked flag is used. A .cargo_vcs_info.json file is included that contains information about the current VCS checkout hash if available, as well as a flag if the worktree is dirty. Symlinks are flattened to their target files. Files and directories are included or excluded based on rules mentioned in the [include] and [exclude] fields . Extract the .crate file and build it to verify it can build. This will rebuild your package from scratch to ensure that it can be built from a pristine state. The --no-verify flag can be used to skip this step. Check that build scripts did not modify any source files. The list of files included can be controlled with the include and exclude fields in the manifest. See the reference for more details about packaging and publishing. .cargo_vcs_info.json format Will generate a .cargo_vcs_info.json in the following format { "git": { "sha1": "aac20b6e7e543e6dd4118b246c77225e3a3a1302", "dirty": true }, "path_in_vcs": "" } dirty indicates that the Git worktree was dirty when the package was built. path_in_vcs will be set to a repo-relative path for packages in subdirectories of the version control repository. The compatibility of this file is maintained under the same policy as the JSON output of cargo-metadata(1) . Note that this file provides a best-effort snapshot of the VCS information. However, the provenance of the package is not verified. There is no guarantee that the source code in the tarball matches the VCS information. OPTIONS Package Options -l --list Print files included in a package without making one. --no-verify Don’t verify the contents by building them. --no-metadata Ignore warnings about a lack of human-usable metadata (such as the description or the license). --allow-dirty Allow working directories with uncommitted VCS changes to be packaged. --exclude-lockfile Don’t include the lock file when packaging. This flag is not for general use. Some tools may expect a lock file to be present (e.g. cargo install --locked ). Consider other options before using this. --index index The URL of the registry index to use. --registry registry Name of the registry to package for; see cargo publish --help for more details about configuration of registry names. 
The packages will not be published to this registry, but if we are packaging multiple inter-dependent crates, lock-files will be generated under the assumption that dependencies will be published to this registry. --message-format fmt Specifies the output message format. Currently, it only works with --list and affects the file listing format. This is unstable and requires -Zunstable-options . Valid output formats: human (default): Display in a file-per-line format. json : Emit machine-readable JSON information about each package. One package per JSON line (Newline delimited JSON). { /* The Package ID Spec of the package. */ "id": "path+file:///home/foo#0.0.0", /* Files of this package */ "files" { /* Relative path in the archive file. */ "Cargo.toml.orig": { /* Where the file is from. - "generate" for file being generated during packaging - "copy" for file being copied from another location. */ "kind": "copy", /* For the "copy" kind, it is an absolute path to the actual file content. For the "generate" kind, it is the original file the generated one is based on. */ "path": "/home/foo/Cargo.toml" }, "Cargo.toml": { "kind": "generate", "path": "/home/foo/Cargo.toml" }, "src/main.rs": { "kind": "copy", "path": "/home/foo/src/main.rs" } } } Package Selection By default, when no package selection options are given, the packages selected depend on the selected manifest file (based on the current working directory if --manifest-path is not given). If the manifest is the root of a workspace then the workspaces default members are selected, otherwise only the package defined by the manifest will be selected. The default members of a workspace can be set explicitly with the workspace.default-members key in the root manifest. If this is not set, a virtual workspace will include all workspace members (equivalent to passing --workspace ), and a non-virtual workspace will include only the root crate itself. -p spec … --package spec … Package only the specified packages. See cargo-pkgid(1) for the SPEC format. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. --workspace Package all members in the workspace. --exclude SPEC … Exclude the specified packages. Must be used in conjunction with the --workspace flag. This flag may be specified multiple times and supports common Unix glob patterns like * , ? and [] . However, to avoid your shell accidentally expanding glob patterns before Cargo handles them, you must use single quotes or double quotes around each pattern. Compilation Options --target triple Package for the specified target architecture. Flag may be specified multiple times. The default is the host architecture. The general format of the triple is <arch><sub>-<vendor>-<sys>-<abi> . Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. This may also be specified with the build.target config value . 
Note that specifying this flag makes Cargo run in a different mode where the target artifacts are placed in a separate directory. See the build cache documentation for more details. --target-dir directory Directory for all generated artifacts and intermediate files. May also be specified with the CARGO_TARGET_DIR environment variable, or the build.target-dir config value . Defaults to target in the root of the workspace. Feature Selection The feature flags allow you to control which features are enabled. When no feature options are given, the default feature is activated for every selected package. See the features documentation for more details. -F features --features features Space or comma separated list of features to activate. Features of workspace members may be enabled with package-name/feature-name syntax. This flag may be specified multiple times, which enables all specified features. --all-features Activate all available features of all selected packages. --no-default-features Do not activate the default feature of the selected packages. Manifest Options --manifest-path path Path to the Cargo.toml file. By default, Cargo searches for the Cargo.toml file in the current directory or any parent directory. --locked Asserts that the exact same dependencies and versions are used as when the existing Cargo.lock file was originally generated. Cargo will exit with an error when either of the following scenarios arises: The lock file is missing. Cargo attempted to change the lock file due to a different dependency resolution. It may be used in environments where deterministic builds are desired, such as in CI pipelines. --offline Prevents Cargo from accessing the network for any reason. Without this flag, Cargo will stop with an error if it needs to access the network and the network is not available. With this flag, Cargo will attempt to proceed without the network if possible. Beware that this may result in different dependency resolution than online mode. Cargo will restrict itself to crates that are downloaded locally, even if there might be a newer version as indicated in the local copy of the index. See the cargo-fetch(1) command to download dependencies before going offline. May also be specified with the net.offline config value . --frozen Equivalent to specifying both --locked and --offline . --lockfile-path PATH Changes the path of the lockfile from the default ( <workspace_root>/Cargo.lock ) to PATH . PATH must end with Cargo.lock (e.g. --lockfile-path /tmp/temporary-lockfile/Cargo.lock ). Note that providing --lockfile-path will ignore existing lockfile at the default path, and instead will either use the lockfile from PATH , or write a new lockfile into the provided PATH if it doesn’t exist. This flag can be used to run most commands in read-only directories, writing lockfile into the provided PATH . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #14421 ). Miscellaneous Options -j N --jobs N Number of parallel jobs to run. May also be specified with the build.jobs config value . Defaults to the number of logical CPUs. If negative, it sets the maximum number of parallel jobs to the number of logical CPUs plus provided value. If a string default is provided, it sets the value back to defaults. Should not be 0. --keep-going Build as many crates in the dependency graph as possible, rather than aborting the build on the first one that fails to build. 
For example if the current package depends on dependencies fails and works , one of which fails to build, cargo package -j1 may or may not build the one that succeeds (depending on which one of the two builds Cargo picked to run first), whereas cargo package -j1 --keep-going would definitely run both builds, even if the one run first fails. Display Options -v --verbose Use verbose output. May be specified twice for “very verbose” output which includes extra output such as dependency warnings and build script output. May also be specified with the term.verbose config value . -q --quiet Do not print cargo log messages. May also be specified with the term.quiet config value . --color when Control when colored output is used. Valid values: auto (default): Automatically detect if color support is available on the terminal. always : Always display colors. never : Never display colors. May also be specified with the term.color config value . Common Options + toolchain If Cargo has been installed with rustup, and the first argument to cargo begins with + , it will be interpreted as a rustup toolchain name (such as +stable or +nightly ). See the rustup documentation for more information about how toolchain overrides work. --config KEY=VALUE or PATH Overrides a Cargo configuration value. The argument should be in TOML syntax of KEY=VALUE , or provided as a path to an extra configuration file. This flag may be specified multiple times. See the command-line overrides section for more information. -C PATH Changes the current working directory before executing any specified operations. This affects things like where cargo looks by default for the project manifest ( Cargo.toml ), as well as the directories searched for discovering .cargo/config.toml , for example. This option must appear before the command name, for example cargo -C path/to/my-project build . This option is only available on the nightly channel and requires the -Z unstable-options flag to enable (see #10098 ). -h --help Prints help information. -Z flag Unstable (nightly-only) flags to Cargo. Run cargo -Z help for details. ENVIRONMENT See the reference for details on environment variables that Cargo reads. EXIT STATUS 0 : Cargo succeeded. 101 : Cargo failed to complete. EXAMPLES Create a compressed .crate file of the current package: cargo package SEE ALSO cargo(1) , cargo-publish(1) | 2026-01-13T09:29:13 |
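One of the packaging steps described above verifies that build scripts did not modify any source files. A minimal, hypothetical build script sketch (not from the Cargo book page) that stays compatible with that check by writing generated code to OUT_DIR instead of into src/:

// build.rs: write generated output under OUT_DIR so the `cargo package`
// verification step never sees modified source files.
use std::{env, fs, path::PathBuf};

fn main() {
    let out_dir = PathBuf::from(env::var("OUT_DIR").expect("OUT_DIR is set by Cargo"));
    fs::write(out_dir.join("generated.rs"), "pub const ANSWER: u32 = 42;\n")
        .expect("failed to write generated file");
    // Re-run the build script only when it changes.
    println!("cargo:rerun-if-changed=build.rs");
}

// The crate can then pull the generated file in with:
//     include!(concat!(env!("OUT_DIR"), "/generated.rs"));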
https://doc.rust-lang.org/cargo/reference/config.html#buildrustc | Configuration - The Cargo Book Configuration This document explains how Cargo’s configuration system works, as well as the available configuration keys. For configuration of a package through its manifest, see the manifest format . Hierarchical structure Cargo allows local configuration for a particular package as well as global configuration. It looks for configuration files in the current directory and all parent directories. If, for example, Cargo were invoked in /projects/foo/bar/baz , then the following configuration files would be probed for and unified in this order: /projects/foo/bar/baz/.cargo/config.toml /projects/foo/bar/.cargo/config.toml /projects/foo/.cargo/config.toml /projects/.cargo/config.toml /.cargo/config.toml $CARGO_HOME/config.toml which defaults to: Windows: %USERPROFILE%\.cargo\config.toml Unix: $HOME/.cargo/config.toml With this structure, you can specify configuration per-package, and even check it into version control. You can also specify personal defaults with a configuration file in your home directory. If a key is specified in multiple config files, the values will get merged together. For numbers, strings, and booleans, the value in the deeper config directory takes precedence over ancestor directories, with the home directory having the lowest priority. Arrays will be joined together, with higher-precedence items placed later in the merged array. At present, when invoked from a workspace, Cargo does not read config files from crates within the workspace. For example, if a workspace has two crates in it, named /projects/foo/bar/baz/mylib and /projects/foo/bar/baz/mybin , and there are Cargo configs at /projects/foo/bar/baz/mylib/.cargo/config.toml and /projects/foo/bar/baz/mybin/.cargo/config.toml , Cargo does not read those configuration files if it is invoked from the workspace root ( /projects/foo/bar/baz/ ). Note: Cargo also reads config files without the .toml extension, such as .cargo/config . Support for the .toml extension was added in version 1.39 and is the preferred form. If both files exist, Cargo will use the file without the extension. Configuration format Configuration files are written in the TOML format (like the manifest), with simple key-value pairs inside of sections (tables). The following is a quick overview of all settings, with detailed descriptions found below.
paths = ["/path/to/override"] # path dependency overrides [alias] # command aliases b = "build" c = "check" t = "test" r = "run" rr = "run --release" recursive_example = "rr --example recursions" space_example = ["run", "--release", "--", "\"command list\""] [build] jobs = 1 # number of parallel jobs, defaults to # of CPUs rustc = "rustc" # the rust compiler tool rustc-wrapper = "…" # run this wrapper instead of `rustc` rustc-workspace-wrapper = "…" # run this wrapper instead of `rustc` for workspace members rustdoc = "rustdoc" # the doc generator tool target = "triple" # build for the target triple (ignored by `cargo install`) target-dir = "target" # path of where to place generated artifacts build-dir = "target" # path of where to place intermediate build artifacts rustflags = ["…", "…"] # custom flags to pass to all compiler invocations rustdocflags = ["…", "…"] # custom flags to pass to rustdoc incremental = true # whether or not to enable incremental compilation dep-info-basedir = "…" # path for the base directory for targets in depfiles [credential-alias] # Provides a way to define aliases for credential providers. my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] [doc] browser = "chromium" # browser to use with `cargo doc --open`, # overrides the `BROWSER` environment variable [env] # Set ENV_VAR_NAME=value for any process run by Cargo ENV_VAR_NAME = "value" # Set even if already present in environment ENV_VAR_NAME_2 = { value = "value", force = true } # `value` is relative to the parent of `.cargo/config.toml`, env var will be the full absolute path ENV_VAR_NAME_3 = { value = "relative/path", relative = true } [future-incompat-report] frequency = 'always' # when to display a notification about a future incompat report [cache] auto-clean-frequency = "1 day" # How often to perform automatic cache cleaning [cargo-new] vcs = "none" # VCS to use ('git', 'hg', 'pijul', 'fossil', 'none') [http] debug = false # HTTP debugging proxy = "host:port" # HTTP proxy in libcurl format ssl-version = "tlsv1.3" # TLS version to use ssl-version.max = "tlsv1.3" # maximum TLS version ssl-version.min = "tlsv1.1" # minimum TLS version timeout = 30 # timeout for each HTTP request, in seconds low-speed-limit = 10 # network timeout threshold (bytes/sec) cainfo = "cert.pem" # path to Certificate Authority (CA) bundle proxy-cainfo = "cert.pem" # path to proxy Certificate Authority (CA) bundle check-revoke = true # check for SSL certificate revocation multiplexing = true # HTTP/2 multiplexing user-agent = "…" # the user-agent header [install] root = "/some/path" # `cargo install` destination directory [net] retry = 3 # network retries git-fetch-with-cli = true # use the `git` executable for git operations offline = true # do not access the network [net.ssh] known-hosts = ["..."] # known SSH host keys [patch.<registry>] # Same keys as for [patch] in Cargo.toml [profile.<name>] # Modify profile settings via config. inherits = "dev" # Inherits settings from [profile.dev]. opt-level = 0 # Optimization level. debug = true # Include debug info. split-debuginfo = '...' # Debug info splitting behavior. strip = "none" # Removes symbols or debuginfo. debug-assertions = true # Enables debug assertions. overflow-checks = true # Enables runtime integer overflow checks. lto = false # Sets link-time optimization. panic = 'unwind' # The panic strategy. incremental = true # Incremental compilation. codegen-units = 16 # Number of code generation units. rpath = false # Sets the rpath linking option. 
[profile.<name>.build-override] # Overrides build-script settings. # Same keys for a normal profile. [profile.<name>.package.<name>] # Override profile for a package. # Same keys for a normal profile (minus `panic`, `lto`, and `rpath`). [resolver] incompatible-rust-versions = "allow" # Specifies how resolver reacts to these [registries.<name>] # registries other than crates.io index = "…" # URL of the registry index token = "…" # authentication token for the registry credential-provider = "cargo:token" # The credential provider for this registry. [registries.crates-io] protocol = "sparse" # The protocol to use to access crates.io. [registry] default = "…" # name of the default registry token = "…" # authentication token for crates.io credential-provider = "cargo:token" # The credential provider for crates.io. global-credential-providers = ["cargo:token"] # The credential providers to use by default. [source.<name>] # source definition and replacement replace-with = "…" # replace this source with the given named source directory = "…" # path to a directory source registry = "…" # URL to a registry source local-registry = "…" # path to a local registry source git = "…" # URL of a git repository source branch = "…" # branch name for the git repository tag = "…" # tag name for the git repository rev = "…" # revision for the git repository [target.<triple>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` rustdocflags = ["…", "…"] # custom flags for `rustdoc` [target.<cfg>] linker = "…" # linker to use runner = "…" # wrapper to run executables rustflags = ["…", "…"] # custom flags for `rustc` [target.<triple>.<links>] # `links` build script override rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] quiet = false # whether cargo output is quiet verbose = false # whether cargo provides verbose output color = 'auto' # whether cargo colorizes output hyperlinks = true # whether cargo inserts links into output unicode = true # whether cargo can render output using non-ASCII unicode characters progress.when = 'auto' # whether cargo shows progress bar progress.width = 80 # width of progress bar progress.term-integration = true # whether cargo reports progress to terminal emulator Environment variables Cargo can also be configured through environment variables in addition to the TOML configuration files. For each configuration key of the form foo.bar the environment variable CARGO_FOO_BAR can also be used to define the value. Keys are converted to uppercase, dots and dashes are converted to underscores. For example the target.x86_64-unknown-linux-gnu.runner key can also be defined by the CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_RUNNER environment variable. Environment variables will take precedence over TOML configuration files. Currently only integer, boolean, string and some array values are supported to be defined by environment variables. Descriptions below indicate which keys support environment variables and otherwise they are not supported due to technical issues . In addition to the system above, Cargo recognizes a few other specific environment variables . Command-line overrides Cargo also accepts arbitrary configuration overrides through the --config command-line option. 
The argument should be in TOML syntax of KEY=VALUE or provided as a path to an extra configuration file: # With `KEY=VALUE` in TOML syntax cargo --config net.git-fetch-with-cli=true fetch # With a path to a configuration file cargo --config ./path/to/my/extra-config.toml fetch The --config option may be specified multiple times, in which case the values are merged in left-to-right order, using the same merging logic that is used when multiple configuration files apply. Configuration values specified this way take precedence over environment variables, which take precedence over configuration files. When the --config option is given a path to an extra configuration file, the configuration loaded from that file follows the same precedence rules as other values specified directly with --config . Some examples of what it looks like using Bourne shell syntax: # Most shells will require escaping. cargo --config http.proxy=\"http://example.com\" … # Spaces may be used. cargo --config "net.git-fetch-with-cli = true" … # TOML array example. Single quotes make it easier to read and write. cargo --config 'build.rustdocflags = ["--html-in-header", "header.html"]' … # Example of a complex TOML key. cargo --config "target.'cfg(all(target_arch = \"arm\", target_os = \"none\"))'.runner = 'my-runner'" … # Example of overriding a profile setting. cargo --config profile.dev.package.image.opt-level=3 … Config-relative paths Paths in config files may be absolute, relative, or a bare name without any path separators. Paths for executables without a path separator will use the PATH environment variable to search for the executable. Paths for non-executables will be relative to where the config value is defined. In particular, the rules are: For environment variables, paths are relative to the current working directory. For config values loaded directly from the --config KEY=VALUE option, paths are relative to the current working directory. For config files, paths are relative to the parent directory of the directory where the config files were defined, regardless of whether those files come from hierarchical probing or from the --config <path> option. Note: To maintain consistency with existing .cargo/config.toml probing behavior, it is by design that a path in a config file passed via --config <path> is also relative to two levels up from the config file itself. To avoid unexpected results, the rule of thumb is to put your extra config files at the same level as the discovered .cargo/config.toml in your project. For instance, given a project /my/project , it is recommended to put config files under /my/project/.cargo or a new directory at the same level, such as /my/project/.config . # Relative path examples. [target.x86_64-unknown-linux-gnu] runner = "foo" # Searches `PATH` for `foo`. [source.vendored-sources] # Directory is relative to the parent where `.cargo/config.toml` is located. # For example, `/my/project/.cargo/config.toml` would result in `/my/project/vendor`. directory = "vendor" Executable paths with arguments Some Cargo commands invoke external programs, which can be configured as a path and some number of arguments. The value may be an array of strings like ['/path/to/program', 'somearg'] or a space-separated string like '/path/to/program somearg' . If the path to the executable contains a space, the list form must be used. If Cargo is passing other arguments to the program such as a path to open or run, they will be passed after the last specified argument in the value of an option of this format.
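As a rough sketch of the two equivalent forms just described (the runner program and its arguments are hypothetical):
[target.x86_64-unknown-linux-gnu]
runner = ["qemu-x86_64", "-cpu", "max"]   # list form; required if the program path contains spaces
# runner = "qemu-x86_64 -cpu max"         # space-separated string form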
If the specified program does not have path separators, Cargo will search PATH for its executable. Credentials Configuration values with sensitive information are stored in the $CARGO_HOME/credentials.toml file. This file is automatically created and updated by cargo login and cargo logout when using the cargo:token credential provider. Tokens are used by some Cargo commands such as cargo publish for authenticating with remote registries. Care should be taken to protect the tokens and to keep them secret. It follows the same format as Cargo config files. [registry] token = "…" # Access token for crates.io [registries.<name>] token = "…" # Access token for the named registry As with most other config values, tokens may be specified with environment variables. The token for crates.io may be specified with the CARGO_REGISTRY_TOKEN environment variable. Tokens for other registries may be specified with environment variables of the form CARGO_REGISTRIES_<name>_TOKEN where <name> is the name of the registry in all capital letters. Note: Cargo also reads and writes credential files without the .toml extension, such as .cargo/credentials . Support for the .toml extension was added in version 1.39. In version 1.68, Cargo writes to the file with the extension by default. However, for backward compatibility reasons, when both files exist, Cargo will read and write the file without the extension. Configuration keys This section documents all configuration keys. The descriptions for keys with variable parts are annotated with angle brackets like target.<triple> where the <triple> part can be any target triple like target.x86_64-pc-windows-msvc . paths Type: array of strings (paths) Default: none Environment: not supported An array of paths to local packages which are to be used as overrides for dependencies. For more information see the Overriding Dependencies guide . [alias] Type: string or array of strings Default: see below Environment: CARGO_ALIAS_<name> The [alias] table defines CLI command aliases. For example, running cargo b is an alias for running cargo build . Each key in the table is the subcommand, and the value is the actual command to run. The value may be an array of strings, where the first element is the command and the following are arguments. It may also be a string, which will be split on spaces into subcommand and arguments. The following aliases are built into Cargo: [alias] b = "build" c = "check" d = "doc" t = "test" r = "run" rm = "remove" Aliases are not allowed to redefine existing built-in commands. Aliases are recursive: [alias] rr = "run --release" recursive_example = "rr --example recursions" [build] The [build] table controls build-time operations and compiler settings. build.jobs Type: integer or string Default: number of logical CPUs Environment: CARGO_BUILD_JOBS Sets the maximum number of compiler processes to run in parallel. If negative, it sets the maximum number of compiler processes to the number of logical CPUs plus the provided value. The value must not be 0. If the string default is provided, it resets the value to the default. Can be overridden with the --jobs CLI option. build.rustc Type: string (program path) Default: "rustc" Environment: CARGO_BUILD_RUSTC or RUSTC Sets the executable to use for rustc . build.rustc-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WRAPPER or RUSTC_WRAPPER Sets a wrapper to execute instead of rustc .
The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). build.rustc-workspace-wrapper Type: string (program path) Default: none Environment: CARGO_BUILD_RUSTC_WORKSPACE_WRAPPER or RUSTC_WORKSPACE_WRAPPER Sets a wrapper to execute instead of rustc , for workspace members only. When building a single-package project without workspaces, that package is considered to be the workspace. The first argument passed to the wrapper is the path to the actual executable to use (i.e., build.rustc , if that is set, or "rustc" otherwise). It affects the filename hash so that artifacts produced by the wrapper are cached separately. If both rustc-wrapper and rustc-workspace-wrapper are set, then they will be nested: the final invocation is $RUSTC_WRAPPER $RUSTC_WORKSPACE_WRAPPER $RUSTC . build.rustdoc Type: string (program path) Default: "rustdoc" Environment: CARGO_BUILD_RUSTDOC or RUSTDOC Sets the executable to use for rustdoc . build.target Type: string or array of strings Default: host platform Environment: CARGO_BUILD_TARGET The default target platform triples to compile to. Possible values: Any supported target in rustc --print target-list . "host-tuple" , which will internally be substituted by the host’s target. This can be particularly useful if you’re cross-compiling some crates, and don’t want to specify your host’s machine as a target (for instance, an xtask in a shared project that may be worked on by many hosts). A path to a custom target specification. See Custom Target Lookup Path for more information. Can be overridden with the --target CLI option. [build] target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"] build.target-dir Type: string (path) Default: "target" Environment: CARGO_BUILD_TARGET_DIR or CARGO_TARGET_DIR The path to where all compiler output is placed. The default if not specified is a directory named target located at the root of the workspace. Can be overridden with the --target-dir CLI option. For more information see the build cache documentation . build.build-dir Type: string (path) Default: Defaults to the value of build.target-dir Environment: CARGO_BUILD_BUILD_DIR The directory where intermediate build artifacts will be stored. Intermediate artifacts are produced by Rustc/Cargo during the build process. This option supports path templating. Available template variables: {workspace-root} resolves to root of the current workspace. {cargo-cache-home} resolves to CARGO_HOME {workspace-path-hash} resolves to a hash of the manifest path For more information see the build cache documentation . build.rustflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTFLAGS or CARGO_ENCODED_RUSTFLAGS or RUSTFLAGS Extra command-line flags to pass to rustc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTFLAGS environment variable. RUSTFLAGS environment variable. All matching target.<triple>.rustflags and target.<cfg>.rustflags config entries joined together. build.rustflags config value. Additional flags may also be passed with the cargo rustc command. If the --target flag (or build.target ) is used, then the flags will only be passed to the compiler for the target. Things being built for the host, such as build scripts or proc macros, will not receive the args. 
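For instance, a sketch like the following (the triple and flag are only illustrative) applies -C target-cpu=native to code compiled for the named target, but not to build scripts or proc macros running on the host:
[build]
target = "x86_64-unknown-linux-gnu"
rustflags = ["-C", "target-cpu=native"]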
Without --target , the flags will be passed to all compiler invocations (including build scripts and proc macros) because dependencies are shared. If you have args that you do not want to pass to build scripts or proc macros and are building for the host, pass --target with the host triple . It is not recommended to pass in flags that Cargo itself usually manages. For example, the flags driven by profiles are best handled by setting the appropriate profile setting. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.rustdocflags Type: string or array of strings Default: none Environment: CARGO_BUILD_RUSTDOCFLAGS or CARGO_ENCODED_RUSTDOCFLAGS or RUSTDOCFLAGS Extra command-line flags to pass to rustdoc . The value may be an array of strings or a space-separated string. There are four mutually exclusive sources of extra flags. They are checked in order, with the first one being used: CARGO_ENCODED_RUSTDOCFLAGS environment variable. RUSTDOCFLAGS environment variable. All matching target.<triple>.rustdocflags config entries joined together. build.rustdocflags config value. Additional flags may also be passed with the cargo rustdoc command. Caution : Due to the low-level nature of passing flags directly to the compiler, this may cause a conflict with future versions of Cargo which may issue the same or similar flags on its own which may interfere with the flags you specify. This is an area where Cargo may not always be backwards compatible. build.incremental Type: bool Default: from profile Environment: CARGO_BUILD_INCREMENTAL or CARGO_INCREMENTAL Whether or not to perform incremental compilation . The default if not set is to use the value from the profile . Otherwise this overrides the setting of all profiles. The CARGO_INCREMENTAL environment variable can be set to 1 to force enable incremental compilation for all profiles, or 0 to disable it. This env var overrides the config setting. build.dep-info-basedir Type: string (path) Default: none Environment: CARGO_BUILD_DEP_INFO_BASEDIR Strips the given path prefix from dep info file paths. This config setting is intended to convert absolute paths to relative paths for tools that require relative paths. The setting itself is a config-relative path. So, for example, a value of "." would strip all paths starting with the parent directory of the .cargo directory. build.pipelining This option is deprecated and unused. Cargo always has pipelining enabled. [credential-alias] Type: string or array of strings Default: empty Environment: CARGO_CREDENTIAL_ALIAS_<name> The [credential-alias] table defines credential provider aliases. These aliases can be referenced as an element of the registry.global-credential-providers array, or as a credential provider for a specific registry under registries.<NAME>.credential-provider . If specified as a string, the value will be split on spaces into path and arguments. For example, to define an alias called my-alias : [credential-alias] my-alias = ["/usr/bin/cargo-credential-example", "--argument", "value", "--flag"] See Registry Authentication for more information. [doc] The [doc] table defines options for the cargo doc command. 
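For example, a minimal sketch (the browser binary is only an assumption about what is installed locally):
[doc]
browser = "firefox"
With this in place, cargo doc --open launches the generated documentation in that browser instead of the system default.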
doc.browser Type: string or array of strings ( program path with args ) Default: BROWSER environment variable, or, if that is missing, opening the link in a system specific way This option sets the browser to be used by cargo doc , overriding the BROWSER environment variable when opening documentation with the --open option. [cargo-new] The [cargo-new] table defines defaults for the cargo new command. cargo-new.name This option is deprecated and unused. cargo-new.email This option is deprecated and unused. cargo-new.vcs Type: string Default: "git" or "none" Environment: CARGO_CARGO_NEW_VCS Specifies the source control system to use for initializing a new repository. Valid values are git , hg (for Mercurial), pijul , fossil or none to disable this behavior. Defaults to git , or none if already inside a VCS repository. Can be overridden with the --vcs CLI option. [env] The [env] section allows you to set additional environment variables for build scripts, rustc invocations, cargo run and cargo build . [env] OPENSSL_DIR = "/opt/openssl" By default, the variables specified will not override values that already exist in the environment. This behavior can be changed by setting the force flag. Setting the relative flag evaluates the value as a config-relative path that is relative to the parent directory of the .cargo directory that contains the config.toml file. The value of the environment variable will be the full absolute path. [env] TMPDIR = { value = "/home/tmp", force = true } OPENSSL_DIR = { value = "vendor/openssl", relative = true } [future-incompat-report] The [future-incompat-report] table controls setting for future incompat reporting future-incompat-report.frequency Type: string Default: "always" Environment: CARGO_FUTURE_INCOMPAT_REPORT_FREQUENCY Controls how often we display a notification to the terminal when a future incompat report is available. Possible values: always (default): Always display a notification when a command (e.g. cargo build ) produces a future incompat report never : Never display a notification [cache] The [cache] table defines settings for cargo’s caches. Global caches When running cargo commands, Cargo will automatically track which files you are using within the global cache. Periodically, Cargo will delete files that have not been used for some period of time. It will delete files that have to be downloaded from the network if they have not been used in 3 months. Files that can be generated without network access will be deleted if they have not been used in 1 month. The automatic deletion of files only occurs when running commands that are already doing a significant amount of work, such as all of the build commands ( cargo build , cargo test , cargo check , etc.), and cargo fetch . Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time. Note : This tracking is currently only implemented for the global cache in Cargo’s home directory. This includes registry indexes and source files downloaded from registries and git dependencies. Support for tracking build artifacts is not yet implemented, and tracked in cargo#13136 . Additionally, there is an unstable feature to support manually triggering cache cleaning, and to further customize the configuration options. See the Unstable chapter for more information. 
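As a sketch, a machine that is often offline might relax the automatic cleaning schedule (the value is only an illustration):
[cache]
auto-clean-frequency = "1 month"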
cache.auto-clean-frequency Type: string Default: "1 day" Environment: CARGO_CACHE_AUTO_CLEAN_FREQUENCY This option defines how often Cargo will automatically delete unused files in the global cache. This does not define how old the files must be, those thresholds are described above . It supports the following settings: "never" — Never deletes old files. "always" — Checks to delete old files every time Cargo runs. An integer followed by “seconds”, “minutes”, “hours”, “days”, “weeks”, or “months” — Checks to delete old files at most the given time frame. [http] The [http] table defines settings for HTTP behavior. This includes fetching crate dependencies and accessing remote git repositories. http.debug Type: boolean Default: false Environment: CARGO_HTTP_DEBUG If true , enables debugging of HTTP requests. The debug information can be seen by setting the CARGO_LOG=network=debug environment variable (or use network=trace for even more information). Be wary when posting logs from this output in a public location. The output may include headers with authentication tokens which you don’t want to leak! Be sure to review logs before posting them. http.proxy Type: string Default: none Environment: CARGO_HTTP_PROXY or HTTPS_PROXY or https_proxy or http_proxy Sets an HTTP and HTTPS proxy to use. The format is in libcurl format as in [protocol://]host[:port] . If not set, Cargo will also check the http.proxy setting in your global git configuration. If none of those are set, the HTTPS_PROXY or https_proxy environment variables set the proxy for HTTPS requests, and http_proxy sets it for HTTP requests. http.timeout Type: integer Default: 30 Environment: CARGO_HTTP_TIMEOUT or HTTP_TIMEOUT Sets the timeout for each HTTP request, in seconds. http.cainfo Type: string (path) Default: none Environment: CARGO_HTTP_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify TLS certificates. If not specified, Cargo attempts to use the system certificates. http.proxy-cainfo Type: string (path) Default: falls back to http.cainfo if not set Environment: CARGO_HTTP_PROXY_CAINFO Path to a Certificate Authority (CA) bundle file, used to verify proxy TLS certificates. http.check-revoke Type: boolean Default: true (Windows) false (all others) Environment: CARGO_HTTP_CHECK_REVOKE This determines whether or not TLS certificate revocation checks should be performed. This only works on Windows. http.ssl-version Type: string or min/max table Default: none Environment: CARGO_HTTP_SSL_VERSION This sets the minimum TLS version to use. It takes a string, with one of the possible values of "default" , "tlsv1" , "tlsv1.0" , "tlsv1.1" , "tlsv1.2" , or "tlsv1.3" . This may alternatively take a table with two keys, min and max , which each take a string value of the same kind that specifies the minimum and maximum range of TLS versions to use. The default is a minimum version of "tlsv1.0" and a max of the newest version supported on your platform, typically "tlsv1.3" . http.low-speed-limit Type: integer Default: 10 Environment: CARGO_HTTP_LOW_SPEED_LIMIT This setting controls timeout behavior for slow connections. If the average transfer speed in bytes per second is below the given value for http.timeout seconds (default 30 seconds), then the connection is considered too slow and Cargo will abort and retry. http.multiplexing Type: boolean Default: true Environment: CARGO_HTTP_MULTIPLEXING When true , Cargo will attempt to use the HTTP2 protocol with multiplexing. 
This allows multiple requests to use the same connection, usually improving performance when fetching multiple files. If false , Cargo will use HTTP 1.1 without pipelining. http.user-agent Type: string Default: Cargo’s version Environment: CARGO_HTTP_USER_AGENT Specifies a custom user-agent header to use. The default if not specified is a string that includes Cargo’s version. [install] The [install] table defines defaults for the cargo install command. install.root Type: string (path) Default: Cargo’s home directory Environment: CARGO_INSTALL_ROOT Sets the path to the root directory for installing executables for cargo install . Executables go into a bin directory underneath the root. To track information of installed executables, some extra files, such as .crates.toml and .crates2.json , are also created under this root. The default if not specified is Cargo’s home directory (default .cargo in your home directory). Can be overridden with the --root command-line option. [net] The [net] table controls networking configuration. net.retry Type: integer Default: 3 Environment: CARGO_NET_RETRY Number of times to retry possibly spurious network errors. net.git-fetch-with-cli Type: boolean Default: false Environment: CARGO_NET_GIT_FETCH_WITH_CLI If this is true , then Cargo will use the git executable to fetch registry indexes and git dependencies. If false , then it uses a built-in git library. Setting this to true can be helpful if you have special authentication requirements that Cargo does not support. See Git Authentication for more information about setting up git authentication. net.offline Type: boolean Default: false Environment: CARGO_NET_OFFLINE If this is true , then Cargo will avoid accessing the network, and attempt to proceed with locally cached data. If false , Cargo will access the network as needed, and generate an error if it encounters a network error. Can be overridden with the --offline command-line option. net.ssh The [net.ssh] table contains settings for SSH connections. net.ssh.known-hosts Type: array of strings Default: see description Environment: not supported The known-hosts array contains a list of SSH host keys that should be accepted as valid when connecting to an SSH server (such as for SSH git dependencies). Each entry should be a string in a format similar to OpenSSH known_hosts files. Each string should start with one or more hostnames separated by commas, a space, the key type name, a space, and the base64-encoded key. For example: [net.ssh] known-hosts = [ "example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFO4Q5T0UV0SQevair9PFwoxY9dl4pQl3u5phoqJH3cF" ] Cargo will attempt to load known hosts keys from common locations supported in OpenSSH, and will join those with any listed in a Cargo configuration file. If any matching entry has the correct key, the connection will be allowed. Cargo comes with the host keys for github.com built-in. If those ever change, you can add the new keys to the config or known_hosts file. See Git Authentication for more details. [patch] Just as you can override dependencies using [patch] in Cargo.toml , you can override them in the cargo configuration file to apply those patches to any affected build. The format is identical to the one used in Cargo.toml . Since .cargo/config.toml files are not usually checked into source control, you should prefer patching using Cargo.toml where possible to ensure that other developers can compile your crate in their own environments. 
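For illustration only, a config-level patch might look like the following; the crate name and path are hypothetical:
[patch.crates-io]
foo = { path = "../generated/foo" }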
Patching through cargo configuration files is generally only appropriate when the patch section is automatically generated by an external build tool. If a given dependency is patched both in a cargo configuration file and a Cargo.toml file, the patch in the configuration file is used. If multiple configuration files patch the same dependency, standard cargo configuration merging is used, which prefers the value defined closest to the current directory, with $HOME/.cargo/config.toml taking the lowest precedence. Relative path dependencies in such a [patch] section are resolved relative to the configuration file they appear in. [profile] The [profile] table can be used to globally change profile settings, and override settings specified in Cargo.toml . It has the same syntax and options as profiles specified in Cargo.toml . See the Profiles chapter for details about the options. [profile.<name>.build-override] Environment: CARGO_PROFILE_<name>_BUILD_OVERRIDE_<key> The build-override table overrides settings for build scripts, proc macros, and their dependencies. It has the same keys as a normal profile. See the overrides section for more details. [profile.<name>.package.<name>] Environment: not supported The package table overrides settings for specific packages. It has the same keys as a normal profile, minus the panic , lto , and rpath settings. See the overrides section for more details. profile.<name>.codegen-units Type: integer Default: See profile docs. Environment: CARGO_PROFILE_<name>_CODEGEN_UNITS See codegen-units . profile.<name>.debug Type: integer or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG See debug . profile.<name>.split-debuginfo Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_SPLIT_DEBUGINFO See split-debuginfo . profile.<name>.debug-assertions Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_DEBUG_ASSERTIONS See debug-assertions . profile.<name>.incremental Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_INCREMENTAL See incremental . profile.<name>.lto Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_LTO See lto . profile.<name>.overflow-checks Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_OVERFLOW_CHECKS See overflow-checks . profile.<name>.opt-level Type: integer or string Default: See profile docs. Environment: CARGO_PROFILE_<name>_OPT_LEVEL See opt-level . profile.<name>.panic Type: string Default: See profile docs. Environment: CARGO_PROFILE_<name>_PANIC See panic . profile.<name>.rpath Type: boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_RPATH See rpath . profile.<name>.strip Type: string or boolean Default: See profile docs. Environment: CARGO_PROFILE_<name>_STRIP See strip . [resolver] The [resolver] table overrides dependency resolution behavior for local development (e.g. excludes cargo install ). resolver.incompatible-rust-versions Type: string Default: See resolver docs Environment: CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS When resolving which version of a dependency to use, select how versions with incompatible package.rust-version s are treated. 
Values include: allow : treat rust-version -incompatible versions like any other version fallback : only consider rust-version -incompatible versions if no other version matched Can be overridden with --ignore-rust-version CLI option Setting the dependency’s version requirement higher than any version with a compatible rust-version Specifying the version to cargo update with --precise See the resolver chapter for more details. MSRV: allow is supported on any version fallback is respected as of 1.84 [registries] The [registries] table is used for specifying additional registries . It consists of a sub-table for each named registry. registries.<name>.index Type: string (url) Default: none Environment: CARGO_REGISTRIES_<name>_INDEX Specifies the URL of the index for the registry. registries.<name>.token Type: string Default: none Environment: CARGO_REGISTRIES_<name>_TOKEN Specifies the authentication token for the given registry. This value should only appear in the credentials file. This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registries.<name>.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRIES_<name>_CREDENTIAL_PROVIDER Specifies the credential provider for the given registry. If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registries.crates-io.protocol Type: string Default: "sparse" Environment: CARGO_REGISTRIES_CRATES_IO_PROTOCOL Specifies the protocol used to access crates.io. Allowed values are git or sparse . git causes Cargo to clone the entire index of all packages ever published to crates.io from https://github.com/rust-lang/crates.io-index/ . This can have performance implications due to the size of the index. sparse is a newer protocol which uses HTTPS to download only what is necessary from https://index.crates.io/ . This can result in a significant performance improvement for resolving new dependencies in most situations. More information about registry protocols may be found in the Registries chapter . [registry] The [registry] table controls the default registry used when one is not specified. registry.index This value is no longer accepted and should not be used. registry.default Type: string Default: "crates-io" Environment: CARGO_REGISTRY_DEFAULT The name of the registry (from the registries table ) to use by default for registry commands like cargo publish . Can be overridden with the --registry command-line option. registry.credential-provider Type: string or array of path and arguments Default: none Environment: CARGO_REGISTRY_CREDENTIAL_PROVIDER Specifies the credential provider for crates.io . If not set, the providers in registry.global-credential-providers will be used. If specified as a string, path and arguments will be split on spaces. For paths or arguments that contain spaces, use an array. If the value exists in the [credential-alias] table, the alias will be used. See Registry Authentication for more information. registry.token Type: string Default: none Environment: CARGO_REGISTRY_TOKEN Specifies the authentication token for crates.io . This value should only appear in the credentials file. 
This is used for registry commands like cargo publish that require authentication. Can be overridden with the --token command-line option. registry.global-credential-providers Type: array Default: ["cargo:token"] Environment: CARGO_REGISTRY_GLOBAL_CREDENTIAL_PROVIDERS Specifies the list of global credential providers. If credential provider is not set for a specific registry using registries.<name>.credential-provider , Cargo will use the credential providers in this list. Providers toward the end of the list have precedence. Path and arguments are split on spaces. If the path or arguments contains spaces, the credential provider should be defined in the [credential-alias] table and referenced here by its alias. See Registry Authentication for more information. [source] The [source] table defines the registry sources available. See Source Replacement for more information. It consists of a sub-table for each named source. A source should only define one kind (directory, registry, local-registry, or git). source.<name>.replace-with Type: string Default: none Environment: not supported If set, replace this source with the given named source or named registry. source.<name>.directory Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a directory source. source.<name>.registry Type: string (url) Default: none Environment: not supported Sets the URL to use for a registry source. source.<name>.local-registry Type: string (path) Default: none Environment: not supported Sets the path to a directory to use as a local registry source. source.<name>.git Type: string (url) Default: none Environment: not supported Sets the URL to use for a git repository source. source.<name>.branch Type: string Default: none Environment: not supported Sets the branch name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.tag Type: string Default: none Environment: not supported Sets the tag name to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. source.<name>.rev Type: string Default: none Environment: not supported Sets the revision to use for a git repository. If none of branch , tag , or rev is set, defaults to the master branch. [target] The [target] table is used for specifying settings for specific platform targets. It consists of a sub-table which is either a platform triple or a cfg() expression . The given values will be used if the target platform matches either the <triple> value or the <cfg> expression. [target.thumbv7m-none-eabi] linker = "arm-none-eabi-gcc" runner = "my-emulator" rustflags = ["…", "…"] [target.'cfg(all(target_arch = "arm", target_os = "none"))'] runner = "my-arm-wrapper" rustflags = ["…", "…"] cfg values come from those built-in to the compiler (run rustc --print=cfg to view) and extra --cfg flags passed to rustc (such as those defined in RUSTFLAGS ). Do not try to match on debug_assertions , test , Cargo features like feature="foo" , or values set by build scripts . If using a target spec JSON file, the <triple> value is the filename stem. For example --target foo/bar.json would match [target.bar] . target.<triple>.ar This option is deprecated and unused. target.<triple>.linker Type: string (program path) Default: none Environment: CARGO_TARGET_<triple>_LINKER Specifies the linker which is passed to rustc (via -C linker ) when the <triple> is being compiled for. By default, the linker is not overridden. 
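For instance, a common cross-compilation setup points Cargo at a GCC cross linker; this is a sketch that assumes the aarch64-linux-gnu-gcc toolchain is installed:
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"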
target.<cfg>.linker This is similar to the target linker , but using a cfg() expression . If both a <triple> and <cfg> linker match, the <triple> will take precedence. It is an error if more than one <cfg> linker matches the current target. target.<triple>.runner Type: string or array of strings ( program path with args ) Default: none Environment: CARGO_TARGET_<triple>_RUNNER If a runner is provided, executables for the target <triple> will be executed by invoking the specified runner with the actual executable passed as an argument. This applies to cargo run , cargo test and cargo bench commands. By default, compiled executables are executed directly. target.<cfg>.runner This is similar to the target runner , but using a cfg() expression . If both a <triple> and <cfg> runner match, the <triple> will take precedence. It is an error if more than one <cfg> runner matches the current target. target.<triple>.rustflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTFLAGS Passes a set of custom flags to the compiler for this <triple> . The value may be an array of strings or a space-separated string. See build.rustflags for more details on the different ways to specify extra flags. target.<cfg>.rustflags This is similar to the target rustflags , but using a cfg() expression . If several <cfg> and <triple> entries match the current target, the flags are joined together. target.<triple>.rustdocflags Type: string or array of strings Default: none Environment: CARGO_TARGET_<triple>_RUSTDOCFLAGS Passes a set of custom flags to rustdoc for this <triple> . The value may be an array of strings or a space-separated string. See build.rustdocflags for more details on the different ways to specify extra flags. target.<triple>.<links> The links sub-table provides a way to override a build script . When specified, the build script for the given links library will not be run, and the given values will be used instead. [target.x86_64-unknown-linux-gnu.foo] rustc-link-lib = ["foo"] rustc-link-search = ["/path/to/foo"] rustc-flags = "-L /some/path" rustc-cfg = ['key="value"'] rustc-env = {key = "value"} rustc-cdylib-link-arg = ["…"] metadata_key1 = "value" metadata_key2 = "value" [term] The [term] table controls terminal output and interaction. term.quiet Type: boolean Default: false Environment: CARGO_TERM_QUIET Controls whether or not log messages are displayed by Cargo. Specifying the --quiet flag will override and force quiet output. Specifying the --verbose flag will override and disable quiet output. term.verbose Type: boolean Default: false Environment: CARGO_TERM_VERBOSE Controls whether or not extra detailed messages are displayed by Cargo. Specifying the --quiet flag will override and disable verbose output. Specifying the --verbose flag will override and force verbose output. term.color Type: string Default: "auto" Environment: CARGO_TERM_COLOR Controls whether or not colored output is used in the terminal. Possible values: auto (default): Automatically detect if color support is available on the terminal. always : Always display colors. never : Never display colors. Can be overridden with the --color command-line option. term.hyperlinks Type: bool Default: auto-detect Environment: CARGO_TERM_HYPERLINKS Controls whether or not hyperlinks are used in the terminal. term.unicode Type: bool Default: auto-detect Environment: CARGO_TERM_UNICODE Controls whether output can be rendered using non-ASCII unicode characters.
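Putting several of the [term] keys above together, a config that forces plain, uncolored ASCII output might look like this (the values are purely illustrative):
[term]
verbose = false
color = "never"
hyperlinks = false
unicode = false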
term.progress.when Type: string Default: "auto" Environment: CARGO_TERM_PROGRESS_WHEN Controls whether or not progress bar is shown in the terminal. Possible values: auto (default): Intelligently guess whether to show progress bar. always : Always show progress bar. never : Never show progress bar. term.progress.width Type: integer Default: none Environment: CARGO_TERM_PROGRESS_WIDTH Sets the width for progress bar. term.progress.term-integration Type: bool Default: auto-detect Environment: CARGO_TERM_PROGRESS_TERM_INTEGRATION Report progress to the terminal emulator for display in places like the task bar. | 2026-01-13T09:29:13 |