| url | text | ts |
|---|---|---|
https://phpa.me/shocm | Eric Van Johnson | PHP Architect Magazine. Eric Van Johnson is one of the members behind PHP Architect, a collective of highly skilled frontend, backend, and mobile developers driven by passion and dedication; their commitment to excellence is evident in the services they provide. Beyond his work with PHP Architect, Eric plays a pivotal role in organizing the php[tek] conference and in the San Diego tech community as a leading organizer of San Diego PHP (SDPHP), a local user group that fosters collaboration and knowledge exchange among PHP enthusiasts. His voice also carries into the podcasting realm with PHPUgly, PHPRoundtable, and the php podcast, where he shares insights, experiences, and discussions on various tech-related topics. Eric's journey into the world of coding began in the early '80s, a time when many of his peers were diving into the universe of the Atari 2600 or Intellivision. Instead of a gaming console, his father presented him with a TRS-80 CoCo equipped with BASIC. This thoughtful gift ignited a spark in Eric, leading him down the path of self-taught coding, a passion that has remained undiminished through the years.
On a personal note, Eric wears the hats of a devoted husband and proud father. He has a penchant for baseball and an undeniable affinity for scotch. A light-hearted anecdote from his past reveals his humorous side: Eric had to reluctantly hang up his baseball cleats when he found it rather challenging to run the bases without spilling his scotch! (Other Socials) Articles: 2024 EOL by Eric Van Johnson. Published in Time For PHP, December 2024. PHP, CouchDB, and Chill by Eric Van Johnson. Published in Lounging Around with PHP, November 2024. Available for free. What's the Big Idea? PhpStorm for VIM Users by Eric Van Johnson. I started my IT career in system operations. I spent many days in a cold data center, racking and stacking servers, installing operating systems, and configuring routers. That's when I started using Vim. It was nice to sit in my cubicle outside the data center and access my servers to do everything I needed in a terminal, including editing files. I didn't put any real thought into why Vim; it was just an option available on the command line that let me edit files. There were others—Nano, Emacs—and today I couldn't honestly tell you why I didn't opt for one of them instead of Vim. But Vim did the job well, so I stuck with it. Published in PHP Reflections, May 2024. Artisan Way: Laracon Online 2022 by Eric Van Johnson. During this time of virtual lifestyles, Laracon Online has been one of the better and more engaging online conferences out there. In my opinion, it doesn't come close to a good in-person conference (like, hmm, I don't know, maybe php[tek] 2023 in Chicago), but as virtual events go, Laracon Online is really well organized, and last month's broadcast was no exception.
by Eric Van Johnson. Published in The State of PHP, October 2022. Community Corner: Interview with PHP 8.1 Release Manager Ben Ramsey by Eric Van Johnson. Last month we talked with Patrick Allaert, a release manager for PHP 8.1, but Patrick is only one member of the dynamic duo of first-time release managers. The core team had been pairing one veteran release manager with one new release manager, but for PHP 8.1 they decided to have two new release managers. Although it may be Ben Ramsey's first time as a release manager, he is not new to PHP internals. As long as I've been lurking on the PHP internals mailing list, I've seen Ben contributing to discussions and voting on RFCs. This month we learn a little more about Ben and his journey into the PHP coding life. Published in Parallelize Your Code, February 2022. Community Corner: Interview with PHP 8.1 Release Manager Patrick Allaert by Eric Van Johnson. PHP 8.1 is now the current version of PHP, and there is a new team managing its releases. For this release, the internals group decided to do something different. There is still a veteran release manager for PHP 8.1, namely Joe Watkins (@krakjoe), but instead of having only one new release manager, or "rookie," they elected two additional people to manage the release. This month we speak with one of those rookie release managers, Patrick Allaert. When Patrick isn't coding PHP or C, you might find him dancing Forró, Salsa, or Bachata, or kitesurfing on the ocean. Published in Domain-Driven Resolutions, January 2022. Community Corner: The PHP Foundation by Eric Van Johnson. When I was young, I remember hearing the saying, "A house is only as good as its foundation." That same philosophy can be applied to many other things: a good foundation in your code, company, family, and communities makes success easier to achieve. I've always felt the PHP internals community had a pretty solid foundation.
It has seen many vital people come and go, and it continues to evolve and innovate. Published in The Zen of Mindful Programming, December 2021. Community Corner: Interview with Wasseem Khayrattee by Eric Van Johnson. Off the coast of Africa, in the Indian Ocean, lies an island named Mauritius, between latitudes 19°58.8'S and 20°31.7'S and longitudes 57°18.0'E and 57°46.5'E. It is 65 km (40 mi) long and 45 km (30 mi) wide, and the country spans 2,040 square kilometers (790 sq mi). It was the only known habitat of the now-extinct dodo, and it is where this month's interviewee, Wasseem Khayrattee, perhaps better known as 7PHP, calls home. 7PHP is the new voice on Voices of the elePHPant, and this month we sit down and talk with him about how he got started, his journey, and what he's up to now. Grab yourself an Alouda or a Phoenix beer, queue up "Top 100 Mauritius" on Spotify, and let's see what makes up the universe known as 7PHP. Published in The Art of Data, November 2021. Community Corner: PHPUnit Creator Sebastian Bergmann, Part Two by Eric Van Johnson. Welcome to part two of our interview with Sebastian Bergmann. Last month we heard about Sebastian's beginnings, how he got started with development, and what led him to write the de facto testing suite for the PHP programming language. This month, Sebastian dives into what it's like managing such a widely used and crucial piece of the PHP ecosystem. Published in Decrypting Cryptography, October 2021. Community Corner: PHPUnit Creator Sebastian Bergmann, Part One by Eric Van Johnson. When Sebastian Bergmann was in university, his professor pulled him aside and said, "Open source is great, PHP is great, but I see that you're interested in these testing concepts, unit testing in particular. That does not exist for PHP. Can I finally convince you now to continue what you do, but with a different language? Do that with Java, do it at the university, and do cool stuff with that?"
Most students would take that sort of advice from their professor as solid words of wisdom and do exactly what had been suggested. Fortunately for all of us, Sebastian wasn't like most students; he took it as a challenge and replied, "Well, just because it has not been implemented for PHP yet does not mean that it cannot be done." Published in It's Really an Upgrade, September 2021. Community Corner: An Interview with Taylor Otwell by Eric Van Johnson. This month, we talk with the person behind the Laravel framework, Taylor Otwell: who he is, and where he is taking Laravel moving forward. Laravel recently turned ten years old, and what a remarkable ten years it has been. Laravel has inspired its own industry, with businesses, training, services, blogs, and podcasts built around and for the framework. Published in Trimming One's Sails, August 2021. Community Corner: Interview with Joe Watkins by Eric Van Johnson. In this month's Community Corner, we speak with Joe Watkins. Sure, we get to know Joe a little better, but we also discuss a very impactful blog post he wrote called "Avoiding Busses." If you've been reading my Community Corner contributions over the past year and a half, you may notice that this isn't my typical profile piece. Published in Deep Dive Into Search, July 2021. Community Corner: Longhorn PHP 2021 by Eric Van Johnson. 2020 was a year of uncertainty, fear, and adjustment. We limited our physical contact with others, created bubbles of safe spaces and people, and tried not to wander too far from those established bubbles. As a result, a lot changed that year. One of the year's casualties was in-person conferences, a staple of our tech industry and something some of us would try to attend at least once a year. Published in Debug, Rinse, Repeat, June 2021. Community Corner: Interview with Ryan Weaver by Eric Van Johnson. Fun fact: no matter where you stand in the state of Michigan, you are never more than 85 miles from a Great Lake.
It's the only state that touches four of the five Great Lakes, it has its own regional dialect, which includes phrases like "a Michigan left" and "Bumpy Cake," and it happens to be the birthplace of this month's Community Corner spotlight, Symfony core member Ryan Weaver. Published in Testing Assumptions, May 2021. Community Corner: A Bref of Fresh Air by Eric Van Johnson. This month, we sit down and have a conversation with Matthieu Napoli. About four years ago, Matthieu saw a gap between the emerging serverless architectures and PHP. When Amazon added custom runtime support in late 2018, he decided to address it, creating the Bref project. Bref lowers the bar on complexity, letting you take advantage of a serverless environment. Published in Busy Worker Bees, April 2021. Community Corner: Interview with Matthew Weier O'Phinney by Eric Van Johnson. This month, I sat down with Matthew Weier O'Phinney, a long-time member of the PHP community and one of the leading contributors to the Laminas (formerly Zend Framework) project. Published in Lambda PHP, March 2021. Community Corner: Interview with Angie Byron, Part Two by Eric Van Johnson. Now on Drupal 9, the community isn't slowing down. This month, we continue our interview with Angie Byron, a.k.a. Webchick: Drupal core committer and product manager, Drupal Association board member, author, speaker, mentor, Mom, and so much more. Currently, she works at Acquia on the Drupal acceleration team, where her primary role is to "make Drupal awesome." We talk about Drupal, coding, family, and her journey throughout the years. Published in Dealing with Data, February 2021. Available for free. Community Corner: Interview with Angie Byron, Part One by Eric Van Johnson. Now on Drupal 9, the community isn't slowing down. This month, we sit down and talk with Angie Byron, a.k.a. Webchick: Drupal core committer and product manager, Drupal Association board member, author, speaker, mentor, Mom, and so much more.
Currently, she works at Acquia on the Drupal acceleration team, where her primary role is to "make Drupal awesome." We talk about Drupal, coding, family, and her journey throughout the years. Published in Newfangled Views, January 2021. Community Corner: An Interview with Andreas Heigl by Eric Van Johnson. It started off simply enough: a friend in school asked Andreas if he could help him write a piece of software. Andreas himself wasn't an aspiring programmer; he was an aspiring district forester. However, he had done some small projects in the past for the Apple platform using FileMaker. This project needed to be different, and it needed to be cross-platform. Andreas remembers reading about a web technology called PHP paired with MySQL and thinking, "It can't be that complicated. Can it?" Published in PHP 8 Bits and Git, December 2020. Community Corner: Podcast—Mic Check by Eric Van Johnson. With it being so difficult to hang out with friends or go to a meetup, podcasts are a great way to stay plugged in and current on what is going on in the development world. There is a wide range of subject matter in podcasting, but for this article I focused on PHP podcasts and some other general development podcasts. Published in SOLID Foundations, November 2020. Available for free. Community Corner: Larabelles by Eric Van Johnson. "If you are looking for a development community and, when you look around, you don't find one, congratulations: you are now the organizer of your new development community." These were the (paraphrased) wise words of one Cal Evans (@calevans). This advice inspired John Congdon to reboot my local PHP user group in San Diego. It's also the action taken by this month's interviewee, Zuzana Kunckova, when she looked around for a development community she wanted to belong to. Zuzana's Twitter announcement was straightforward and said it all.
Published in Running Parallel, October 2020. Community Corner: PHP 8 Release Managers: Interview with Sara Golemon and Gabriel Caruso, Part Three by Eric Van Johnson. Part three concludes my interview with the PHP 8 release managers about PHP internals. We touch on getting started contributing to internals via RFCs, becoming release managers, the commitment required by that role, and how the project's release cycles have evolved. Published in Under the Scope, September 2020. Available for free. Community Corner: PHP 8 Release Managers: Interview with Sara Golemon and Gabriel Caruso, Part Two by Eric Van Johnson. In part two, I continue chatting with the PHP 8 release managers about PHP internals, preparing a new release, the evolution of the language, and where it might go in the future. Published in Data Discipline, August 2020. Community Corner: PHP 8 Release Managers: Interview with Sara Golemon and Gabriel Caruso, Part One by Eric Van Johnson. I've been contributing to Community Corner for a few months now, so you probably know by now that I am not a journalist and that I love PHP. I love coding with it, talking to people about it, and meeting new people involved with it. I've had the opportunity to speak with a lot of fantastic people, from community organizers to internals contributors, but this month is probably the highlight for me, as I sat down to speak with Sara Golemon and Gabriel Caruso, the two release managers of PHP 8.0. Published in Warp Driven Development, July 2020. Community Corner: Let's Talk Xdebug by Eric Van Johnson. This month, we take a moment to speak with—well, technically email with—a member of the PHP community. We are very fortunate to have a community filled with people who care about making PHP stronger. Today we speak with Derick Rethans (@derickr): author, conference speaker, PHP 7.4 release manager, host of the PHP Internals News podcast, and the creator and maintainer of Xdebug.
Published in Advanced Design & Development, June 2020. Community Corner: York Region PHP User Group by Eric Van Johnson. This month, we revisit our Canadian friends. This time we travel north of Toronto to the York Region and the York Region PHP User Group. Published in Unsupervised Learning, May 2020. Community Corner: PHP Adelaide: PHP Down Under by Eric Van Johnson. This month, we travel halfway around the world; well, at least I do, as this might be right around the corner for you. We find ourselves in the land down under, "where women glow and men plunder" (well, that's how the song goes, anyway): Adelaide, Australia. Published in Machine Learning and OpenAPI, April 2020. Community Corner: AustinPHP by Eric Van Johnson. Within Texas is the beautiful city of Austin, one of the fastest-growing cities in the United States and the state capital of Texas. Austin has caught the eye of more than cowboys and musicians; it has caught the eye of tech, with several Fortune 500 companies establishing a presence there. Logan Lindquist has taken the time to write up a profile of AustinPHP, and I would like to share it with you. Published in How Magento is Evolving, March 2020. Community Corner: Greater Toronto Area PHP by Eric Van Johnson. This month, in our little community corner, we travel back to Canada; Toronto, more specifically. This city sports a population of over 5.6 million people who speak over 180 languages, and it hosts offices of some of technology's heaviest hitters, such as IBM, Microsoft, Oracle, Facebook, Twitter, and Google. Published in Cultivating the Developer Experience, February 2020. Community Corner: ArizonaPHP by Eric Van Johnson. If you've never been to the deserts of Arizona here in the U.S., you may think it's a desolate and harsh land. Perhaps you envision Mad Max-style cars driving around looking for fuel and water.
Maybe you think of old Spaghetti Westerns: dusty, small, disconnected towns where strangers are not welcome. Nothing could be further from the truth. Published in New Habits, January 2020. Community Corner: Vancouver PHP by Eric Van Johnson. For the next stop on our tour of user groups, we travel to the Great White North: Vancouver, British Columbia, Canada. Vancouver is often listed as a top city for quality of life, and it's one of Canada's densest and most ethnically diverse cities. In recent years, Vancouver has earned the tag of Hollywood North, becoming home to many top film productions. Published in Expedition PHP, December 2019. Community Corner: San Diego PHP by Eric Van Johnson. You might recognize me as one of the contributors to the php podcast; if you're not familiar with it, it's a great companion to the magazine and the community of PHP podcasts! I've been a subscriber to php[architect] magazine since 2003, so I was thrilled when Oscar Merida asked me if I would be interested in contributing to Community Corner. I am going to take a little different approach to the Community Corner and focus on the various user groups: who they are, and the awesome people running them. Published in Object Orientation, November 2019.
Copyright © 2002-2026 PHP Architect, LLC. All amounts in USD. | 2026-01-13T09:30:34 |
https://logging.apache.org/log4j/2.x/download.html#binary-distribution | Download :: Apache Log4j, a subproject of Apache Logging Services. You can manually download all published Log4j distributions, verify them, and see their licensing information by following the instructions on the Download page of Logging Services. Are you looking for the Log4j installation instructions? Proceed to Installation. Are you looking for the list of changes associated with a particular release? Proceed to Release notes. Source distribution. You can download the source code of the latest Log4j release using the links below: Table 1.
Source distribution files. Sources: apache-log4j-2.25.3-src.zip; Checksum: apache-log4j-2.25.3-src.zip.sha512; Signature: apache-log4j-2.25.3-src.zip.asc; Signing keys: KEYS. Binary distribution. A set of Log4j binaries is available through two main distribution channels. ASF Nexus Repository: all the binary artifacts are available on the Apache Software Foundation repository.apache.org Nexus repository. Its content is mirrored to the Maven Central repository. See Components for more information on the GAV coordinates of the artifacts. Binary distribution archive: all the artifacts in the ASF Nexus repository are also available in a single ZIP archive. Table 2. Binary distribution files. Binaries: apache-log4j-2.25.3-bin.zip; Checksum: apache-log4j-2.25.3-bin.zip.sha512; Signature: apache-log4j-2.25.3-bin.zip.asc; Signing keys: KEYS. The authenticity of the Log4j binary release is independently verified by the Reproducible Builds for Maven Central Repository project. You can check the reproducibility status of the artifacts on their org.apache.logging.log4j:log4j RB check page. Software Bill of Materials (SBOM). Each Log4j artifact is accompanied by a Software Bill of Materials (SBOM). See the Download page of Logging Services for details. Available versions. Below you can find the list of available Log4j versions and their associated maintenance status: Active Development (AD), Active Maintenance (AM), End-of-Maintenance (EOM), and End-of-Life (EOL). Refer to Versioning and maintenance policy for details. Table 3. Maintenance status of selected Log4j versions (Version · Status · Latest release · First stable release · EOM · EOL · Notes):
3.0.x · AD · 3.0.0-beta3
2.26.x · AD
2.25.x · AM · 2.25.3 · 2025-12-15
2.24.x · EOM · 2.24.3 · 2024-09-03 · 2025-06-13
2.12.x · EOM · 2.12.4 · 2019-06-23 · 2021-12-29 · Last release supporting Java 7
2.3.x · EOM · 2.3.2 · 2015-05-09 · 2021-12-29 · Last release supporting Java 6
1.x · EOL · 1.2.17 · 2000-01-08 · 2014-07-12 · 2015-08-05 · Last release supporting Java 1.4
Table 4.
Maintenance status of all Log4j versions (Version · Status · Latest release · First release · EOM · EOL):
3.0.x · AD · 3.0.0-beta3
2.26.x · AD
2.25.x · AM · 2.25.2 · 2025-06-13
2.24.x · EOM · 2.24.3 · 2024-09-03 · 2025-06-13
2.23.x · EOM · 2.23.1 · 2024-02-17 · 2024-09-03
2.22.x · EOM · 2.22.1 · 2023-11-17 · 2024-02-17
2.21.x · EOM · 2.21.1 · 2023-10-12 · 2023-11-17
2.20.x · EOM · 2.20.0 · 2023-02-17 · 2023-10-12
2.19.x · EOM · 2.19.0 · 2022-09-09 · 2023-02-17
2.18.x · EOM · 2.18.0 · 2022-06-28 · 2022-09-09
2.17.x · EOM · 2.17.2 · 2021-12-17 · 2022-06-28
2.16.x · EOM · 2.16.0 · 2021-12-13 · 2021-12-17
2.15.x · EOM · 2.15.0 · 2021-12-06 · 2021-12-13
2.14.x · EOM · 2.14.1 · 2020-11-06 · 2021-12-06
2.13.x · EOM · 2.13.3 · 2019-12-11 · 2020-11-06
2.12.x · EOM · 2.12.4 · 2019-06-23 · 2021-12-29
2.11.x · EOM · 2.11.2 · 2018-03-11 · 2019-06-23
2.10.x · EOM · 2.10.0 · 2017-11-18 · 2018-03-11
2.9.x · EOM · 2.9.1 · 2017-08-26 · 2017-11-18
2.8.x · EOM · 2.8.2 · 2017-01-21 · 2017-08-26
2.7.x · EOM · 2.7 · 2016-10-02 · 2017-01-21
2.6.x · EOM · 2.6.2 · 2016-05-25 · 2016-10-02
2.5.x · EOM · 2.5 · 2015-12-06 · 2016-05-25
2.4.x · EOM · 2.4.1 · 2015-09-20 · 2015-12-06
2.3.x · EOM · 2.3.2 · 2015-05-09 · 2021-12-29
2.2.x · EOM · 2.2 · 2015-02-22 · 2015-05-09
2.1.x · EOM · 2.1 · 2014-10-19 · 2015-02-22
2.0.x · EOM · 2.0.2 · 2014-07-12 · 2014-10-19
1.x · EOL · 1.2.17 · 2000-01-08 · 2014-07-12 · 2015-08-05
Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34 |
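The checksum and signature files listed above are meant to be checked before a downloaded archive is trusted. As a minimal sketch of that flow (using a locally created stand-in file rather than the real apache-log4j-2.25.3-bin.zip, and assuming the common `sha512sum` and `gpg` tools are installed), it might look like:

```shell
# Stand-in for the downloaded archive; in practice this would be the real
# apache-log4j-2.25.3-bin.zip fetched from the download page.
printf 'demo payload' > log4j-demo-bin.zip

# The published .sha512 file pairs a digest with the file name; produce
# the same shape locally for the stand-in archive.
sha512sum log4j-demo-bin.zip > log4j-demo-bin.zip.sha512

# Verify the archive against the recorded digest.
# Prints "log4j-demo-bin.zip: OK" and exits 0 on success.
sha512sum -c log4j-demo-bin.zip.sha512

# For a real release, additionally verify the detached PGP signature
# against the published KEYS file (shown as comments, not run here):
#   gpg --import KEYS
#   gpg --verify apache-log4j-2.25.3-bin.zip.asc apache-log4j-2.25.3-bin.zip
```

Note that the `sha512sum -c` step only proves the download was not corrupted; verifying the `.asc` signature against the KEYS file is what ties the archive to the release managers' keys.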
http://php.net/cached.php?t=1756715876&f=/styles/home.css | PHP: A popular general-purpose scripting language that is especially suited to web development.
Fast, flexible and pragmatic, PHP powers everything from your blog to the most popular websites in the world. What's new in 8.5. Download: 8.5.1 (Changelog · Upgrading), 8.4.16 (Changelog · Upgrading), 8.3.29 (Changelog · Upgrading), 8.2.30 (Changelog · Upgrading). 18 Dec 2025: PHP 8.1.34 Released! The PHP development team announces the immediate availability of PHP 8.1.34. This is a security release. All PHP 8.1 users are encouraged to upgrade to this version. For source downloads of PHP 8.1.34, please visit our downloads page; Windows source and binaries can also be found there. The list of changes is recorded in the ChangeLog. 18 Dec 2025: PHP 8.4.16 Released! The PHP development team announces the immediate availability of PHP 8.4.16. This is a security release. All PHP 8.4 users are encouraged to upgrade to this version. For source downloads of PHP 8.4.16, please visit our downloads page; Windows source and binaries can also be found there. The list of changes is recorded in the ChangeLog. 18 Dec 2025: PHP 8.2.30 Released! The PHP development team announces the immediate availability of PHP 8.2.30. This is a security release. All PHP 8.2 users are encouraged to upgrade to this version. For source downloads of PHP 8.2.30, please visit our downloads page; Windows source and binaries can also be found there. The list of changes is recorded in the ChangeLog. 18 Dec 2025: PHP 8.3.29 Released! The PHP development team announces the immediate availability of PHP 8.3.29. This is a security release. All PHP 8.3 users are encouraged to upgrade to this version. For source downloads of PHP 8.3.29, please visit our downloads page; Windows source and binaries can also be found there. The list of changes is recorded in the ChangeLog. 18 Dec 2025: PHP 8.5.1 Released! The PHP development team announces the immediate availability of PHP 8.5.1. This is a security release. All PHP 8.5 users are encouraged to upgrade to this version.
For source downloads of PHP 8.5.1, please visit our downloads page; Windows source and binaries can also be found there. The list of changes is recorded in the ChangeLog. 20 Nov 2025: PHP 8.5.0 Released! The PHP development team announces the immediate availability of PHP 8.5.0. This release marks the latest minor release of the PHP language. PHP 8.5 comes with numerous improvements and new features, such as: a new "URI" extension; a new pipe operator (|>); clone with; a new #[\NoDiscard] attribute; support for closures, casts, and first-class callables in constant expressions; and much, much more. For source downloads of PHP 8.5.0, please visit our downloads page; Windows source and binaries can also be found there. The list of changes is recorded in the ChangeLog. The migration guide is available in the PHP Manual; please consult it for the detailed list of new features and backward-incompatible changes. Kudos to all the contributors and supporters! 20 Nov 2025: PHP 8.4.15 Released! The PHP development team announces the immediate availability of PHP 8.4.15. This is a bug fix release. All PHP 8.4 users are encouraged to upgrade to this version. For source downloads of PHP 8.4.15, please visit our downloads page; Windows source and binaries can also be found there. The list of changes is recorded in the ChangeLog. 20 Nov 2025: PHP 8.3.28 Released! The PHP development team announces the immediate availability of PHP 8.3.28. This is a bug fix release. All PHP 8.3 users are encouraged to upgrade to this version. For source downloads of PHP 8.3.28, please visit our downloads page; Windows source and binaries can also be found there. The list of changes is recorded in the ChangeLog. 13 Nov 2025: PHP 8.5.0 RC 5 available for testing. The PHP team is pleased to announce the fifth release candidate of PHP 8.5.0, RC 5. This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki.
For source downloads of PHP 8.5.0 RC5, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is a test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be the GA release of PHP 8.5.0, planned for 20 Nov 2025. The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better. 06 Nov 2025: PHP 8.5.0 RC4 available for testing. The PHP team is pleased to announce the final planned release candidate of PHP 8.5.0, RC 4. This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 8.5.0 RC4, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is a test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be the GA release of PHP 8.5.0, planned for 20 Nov 2025. The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better. 23 Oct 2025: PHP 8.3.27 Released! The PHP development team announces the immediate availability of PHP 8.3.27. This is a bug fix release. All PHP 8.3 users are encouraged to upgrade to this version. For source downloads of PHP 8.3.27, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog. 23 Oct 2025: PHP 8.4.14 Released!
The PHP development team announces the immediate availability of PHP 8.4.14. This is a bug fix release. All PHP 8.4 users are encouraged to upgrade to this version. For source downloads of PHP 8.4.14, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.

23 Oct 2025
PHP 8.5.0 RC 3 available for testing

The PHP team is pleased to announce the third release candidate of PHP 8.5.0, RC 3. This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 8.5.0 RC3, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be RC4, planned for 6 Nov 2025. The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better.

09 Oct 2025
PHP 8.5.0 RC 2 available for testing

The PHP team is pleased to announce the second release candidate of PHP 8.5.0, RC 2. This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 8.5.0 RC2, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be RC3, planned for 23 Oct 2025.
The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better.

25 Sep 2025
PHP 8.5.0 RC 1 available for testing

The PHP team is pleased to announce the first release candidate of PHP 8.5.0, RC 1. This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 8.5.0 RC1, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be RC2, planned for 9 Oct 2025. The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better.

25 Sep 2025
PHP 8.3.26 Released!

The PHP development team announces the immediate availability of PHP 8.3.26. This is a bug fix release. All PHP 8.3 users are encouraged to upgrade to this version. For source downloads of PHP 8.3.26, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.

25 Sep 2025
PHP 8.4.13 Released!

The PHP development team announces the immediate availability of PHP 8.4.13. This is a bug fix release. All PHP 8.4 users are encouraged to upgrade to this version. For source downloads of PHP 8.4.13, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.

11 Sep 2025
PHP 8.5.0 Beta 3 available for testing

The PHP team is pleased to announce the third beta release of PHP 8.5.0, Beta 3.
This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 8.5.0 Beta 3, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be RC1, planned for 25 Sep 2025. The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better.

28 Aug 2025
PHP 8.5.0 Beta 2 available for testing

The PHP team is pleased to announce the second beta release of PHP 8.5.0, Beta 2. This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 8.5.0 Beta 2, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be Beta 3, planned for 11 Sep 2025. The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better.

28 Aug 2025
PHP 8.3.25 Released!

The PHP development team announces the immediate availability of PHP 8.3.25. This is a bug fix release. All PHP 8.3 users are encouraged to upgrade to this version. For source downloads of PHP 8.3.25, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
28 Aug 2025
PHP 8.4.12 Released!

The PHP development team announces the immediate availability of PHP 8.4.12. This is a bug fix release. All PHP 8.4 users are encouraged to upgrade to this version. For source downloads of PHP 8.4.12, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.

14 Aug 2025
PHP 8.5.0 Beta 1 available for testing

The PHP team is pleased to announce the first beta release of PHP 8.5.0, Beta 1. This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 8.5.0 Beta 1, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be Beta 2, planned for 28 Aug 2025. The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better.

01 Aug 2025
PHP 8.5.0 Alpha 4 available for testing

The PHP team is pleased to announce the third testing release of PHP 8.5.0, Alpha 4. This continues the PHP 8.5 release cycle, the rough outline of which is specified in the PHP Wiki. For source downloads of PHP 8.5.0 Alpha 4, please visit the download page. Please carefully test this version and report any issues found on GitHub. Please DO NOT use this version in production; it is an early test version. For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. The next release will be Beta 1, planned for 14 Aug 2025.
The signatures for the release can be found in the manifest or on the Release Candidates page. Thank you for helping us make PHP better.

31 Jul 2025
PHP 8.4.11 Released!

The PHP development team announces the immediate availability of PHP 8.4.11. This is a bug fix release. All PHP 8.4 users are encouraged to upgrade to this version. For source downloads of PHP 8.4.11, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.

31 Jul 2025
PHP 8.3.24 Released!

The PHP development team announces the immediate availability of PHP 8.3.24. This is a bug fix release. All PHP 8.3 users are encouraged to upgrade to this version. For source downloads of PHP 8.3.24, please visit our downloads page; Windows source and binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.

The PHP Foundation is a collective of people and organizations, united in the mission to ensure the long-term prosperity of the PHP language.

Copyright © 2001-2026 The PHP Group
https://logging.apache.org/log4j/2.x/manual/customconfig.html | Programmatic configuration :: Apache Log4j

Programmatic configuration

Next to configuration files, Log4j Core can also be configured programmatically. On this page, we will explore utilities that help with programmatic configuration and demonstrate how they can be leveraged for certain use cases.

Preliminaries

To begin with, we strongly encourage you to check out the Architecture page first. Let's repeat some basic definitions of particular interest:

LoggerContext: It is the anchor of the logging system. Generally there is one statically-accessible, global LoggerContext for most applications.
But there can be multiple LoggerContexts, for instance, for use in tests, in Java EE web applications, etc.

Configuration: It encapsulates a Log4j Core configuration (properties, appenders, loggers, etc.) and is associated with a LoggerContext.

Tooling

For programmatic configuration, Log4j Core essentially provides the following tooling:

- ConfigurationBuilder, for declaratively creating a Configuration
- Configurator, for associating a Configuration with a LoggerContext
- ConfigurationFactory, for registering a Configuration factory with the configuration file mechanism

In short, we will create Configurations using ConfigurationBuilder and activate them using Configurator.

ConfigurationBuilder

The ConfigurationBuilder interface models a fluent API to programmatically create Configurations. If you have ever created a Log4j Core configuration file, consider ConfigurationBuilder a convenience utility to model the very same declarative configuration structure programmatically. Let's show ConfigurationBuilder usage with an example.
Consider the following Log4j Core configuration file (shown in XML, JSON, YAML, and properties formats):

Snippet from an example log4j2.xml:

<Appenders>
  <Console name="CONSOLE">
    <JsonTemplateLayout/>
  </Console>
</Appenders>
<Loggers>
  <Root level="WARN">
    <AppenderRef ref="CONSOLE"/>
  </Root>
</Loggers>

Snippet from an example log4j2.json:

"Appenders": {
  "Console": {
    "name": "CONSOLE",
    "JsonTemplateLayout": {}
  }
},
"Loggers": {
  "Root": {
    "level": "WARN",
    "AppenderRef": {
      "ref": "CONSOLE"
    }
  }
}

Snippet from an example log4j2.yaml:

Appenders:
  Console:
    name: "CONSOLE"
    JsonTemplateLayout: {}
Loggers:
  Root:
    level: "WARN"
    AppenderRef:
      ref: "CONSOLE"

Snippet from an example log4j2.properties:

appender.0.type = Console
appender.0.name = CONSOLE
appender.0.layout.type = JsonTemplateLayout
rootLogger.level = WARN
rootLogger.appenderRef.0.ref = CONSOLE

The above Log4j Core configuration can be built programmatically using ConfigurationBuilder as follows:

Snippet from an example Usage.java:

ConfigurationBuilder<BuiltConfiguration> configBuilder =
    ConfigurationBuilderFactory.newConfigurationBuilder();      (1)
Configuration configuration = configBuilder
    .add(
        configBuilder                                           (2)
            .newAppender("CONSOLE", "List")
            .add(configBuilder.newLayout("JsonTemplateLayout")))
    .add(
        configBuilder                                           (3)
            .newRootLogger(Level.WARN)
            .add(configBuilder.newAppenderRef("CONSOLE")))
    .build(false);                                              (4)

(1) The default ConfigurationBuilder instance is obtained using the ConfigurationBuilderFactory.newConfigurationBuilder() static method
(2) Add the appender along with the layout
(3) Add the root logger along with a level and an appender reference
(4) Create the configuration, but don't initialize it

It is good practice not to initialize Configurations when they are constructed. This task should ideally be delegated to Configurator. ConfigurationBuilder has convenience methods for the base components that can be configured, such as loggers, appenders, filters, properties, etc.
There are cases, though, where the provided convenience methods fall short:

- Custom plugins that are declared to be represented in a configuration
- Custom subcomponents (e.g., a triggering policy for rolling file appenders)

For those, you can use the generic ConfigurationBuilder#newComponent() method. See Configurator1Test.java for examples of ConfigurationBuilder, newComponent(), etc. usage.

Configurator

Configurator is a programmatic interface to associate a Configuration with either a new or an existing LoggerContext.

Obtaining a LoggerContext

You can use Configurator to obtain a LoggerContext:

Snippet from an example Usage.java:

Configuration configuration = createConfiguration();
try (LoggerContext loggerContext = Configurator.initialize(configuration)) {
    // Use `LoggerContext`...
}

initialize() will either return the LoggerContext currently associated with the caller, or create a new one. This is a convenient way to create isolated LoggerContexts for tests, etc.

Reconfiguring the active LoggerContext

You can use Configurator to reconfigure the active LoggerContext as follows:

Snippet from an example Usage.java:

Configuration configuration = createConfiguration();
Configurator.reconfigure(configuration);

Using the Configurator in this manner gives the application control over when Log4j is initialized. However, should any logging be attempted before Configurator.initialize() is called, the default configuration will be used for those log events.

ConfigurationFactory

The ConfigurationFactory interface, which is mainly used by the configuration file mechanism to load a Configuration, can be leveraged to inject a custom Configuration. You need to:

1. Create a custom ConfigurationFactory plugin
2. Assign it a higher priority (i.e., a higher @Order value)
3. Support all configuration file types (i.e.
return * from getSupportedTypes())

Consider the example below:

Snippet from an example ExampleConfigurationFactory.java:

@Order(100)
@Plugin(name = "ExampleConfigurationFactory", category = ConfigurationFactory.CATEGORY)
public class ExampleConfigurationFactory extends ConfigurationFactory {

    @Override
    public Configuration getConfiguration(LoggerContext loggerContext, ConfigurationSource source) {    (1)
        // Return a `Configuration`...
    }

    @Override
    public Configuration getConfiguration(LoggerContext loggerContext, String name, URI configLocation) {
        // Return a `Configuration`...
    }

    @Override
    public String[] getSupportedTypes() {
        return new String[] {"*"};
    }
}

(1) getConfiguration(LoggerContext, ConfigurationSource) is only called if ConfigurationSource is not null. This is possible if the Configuration is provided programmatically. Hence, you are encouraged to implement the getConfiguration(LoggerContext, String, URI) overload too.

How-to guides

In this section we will share guides on programmatically configuring Log4j Core for certain use cases.

Loading a configuration file

ConfigurationFactory provides the getInstance() method, returning a meta-ConfigurationFactory that combines the behaviour of all available ConfigurationFactory implementations, including the predefined ones: XmlConfigurationFactory, JsonConfigurationFactory, etc.
You can use this getInstance() method to load a configuration file programmatically, granted that the input file format is supported by at least one of the available ConfigurationFactory plugins:

Snippet from an example Usage.java:

ConfigurationFactory.getInstance()
    .getConfiguration(
        null,                                    (1)
        null,                                    (2)
        URI.create("uri://to/my/log4j2.xml"));   (3)

(1) Passing the LoggerContext argument as null, since this is the first time we are instantiating this Configuration and it is not associated with a LoggerContext yet
(2) Passing the configuration name argument as null, since it is not used when the configuration source location is provided
(3) URI pointing to the configuration file; file://path/to/log4j2.xml, classpath:log4j2.xml, etc.

Combining multiple configurations

There are occasions where multiple configurations might need to be combined. For instance:

- You have a common Log4j Core configuration that should always be present, and an environment-specific one that extends the common one depending on the environment (test, production, etc.) the application is running on.
- You develop a framework, and it contains a predefined Log4j Core configuration. Yet you want to allow users to extend it whenever necessary.
- You collect Log4j Core configurations from multiple sources.

You can programmatically combine multiple configurations into a single one using CompositeConfiguration:

Snippet from an example Usage.java:

ConfigurationFactory configFactory = ConfigurationFactory.getInstance();
AbstractConfiguration commonConfig = (AbstractConfiguration)                                  (2)
    configFactory.getConfiguration(null, null, URI.create("classpath:log4j2-common.xml"));    (1)
AbstractConfiguration appConfig = (AbstractConfiguration)                                     (2)
    configFactory.getConfiguration(null, null, URI.create("classpath:log4j2-app.xml"));       (1)
AbstractConfiguration runtimeConfig = ConfigurationBuilderFactory.newConfigurationBuilder()
    // ...
    .build(false);                                                                            (3)
return new CompositeConfiguration(Arrays.asList(commonConfig, appConfig, runtimeConfig));     (4)

(1) Loading a common and an application-specific configuration from file
(2) Casting them to AbstractConfiguration, the type required by CompositeConfiguration
(3) Programmatically creating an uninitialized configuration. Note that no casting is needed.
(4) Creating a CompositeConfiguration from all three configurations created. Note that the order of the passed configurations matters!

How does CompositeConfiguration work?

CompositeConfiguration merges multiple configurations into a single one using a MergeStrategy, which can be customized using the log4j2.mergeStrategy configuration property. The default merge strategy works as follows:

- Global configuration attributes in later configurations replace those in previous configurations. The only exception is the monitorInterval attribute: the lowest positive value from all the configuration files will be used.
- Properties are aggregated. Duplicate properties override those in previous configurations.
- Filters are aggregated under a CompositeFilter, if more than one filter is defined.
- Scripts are aggregated. Duplicate definitions override those in previous configurations.
- Appenders are aggregated. Appenders with the same name are overridden by those in later configurations, including all their elements.
- Loggers are aggregated. Logger attributes are individually merged, and those in later configurations replace duplicates. Appender references on a logger are aggregated, and those in later configurations replace duplicates. The strategy merges filters on loggers using the rule above.

Modifying configuration components

We strongly advise against programmatically modifying the components of a configuration! This section will explain what that means, and why you should avoid it. It is unfortunately common that users modify components (appenders, filters, etc.)
of a configuration programmatically as follows:

LoggerContext context = LoggerContext.getContext(false);
Configuration config = context.getConfiguration();
PatternLayout layout = PatternLayout.createDefaultLayout(config);
Appender appender = createCustomAppender();
appender.start();
config.addAppender(appender);
updateLoggers(appender, config);

This approach is prone to several problems:

- Your code relies on Log4j Core internals, which don't have any backward compatibility guarantees. You not only risk breaking your build at a minor Log4j Core version upgrade, but also make the life of Log4j maintainers trying to evolve the project extremely difficult.
- You move out of the safety zone, where Log4j Core takes care of components' life cycle (initialization, reconfiguration, etc.), and step into a minefield, seriously undermining the reliability of your logging setup.

If you happen to have code programmatically modifying the components of a configuration, we advise you to migrate to the declarative approaches shared on this page. In case of need, feel free to ask for help in the user support channels.

Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Please read our privacy policy. Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
https://www.visma.com/voiceofvisma/episode-08-steffen-torp | Ep 08: Navigating the waters of entrepreneurship with Steffen Torp

Voice of Visma
July 3, 2024

Listen on: Spotify
YouTube, Apple Podcasts, Amazon Music

About the episode

When it comes to being an entrepreneur, the journey is as personal as it is unpredictable. So today, Steffen and Diana are reminiscing on his unique (sometimes precarious) entrepreneurial experiences and why he believes business software is a key component of building trust within today's societies.

More from Voice of Visma

We're sitting down with leaders and colleagues from around Visma to share their stories, industry knowledge, and valuable career lessons. With the Voice of Visma podcast, we're bringing our people and culture closer to you.

- Ep 22: Building, learning, and accelerating growth in the SaaS world with Maxin Schneider: Entrepreneurial leadership often grows through experience, and Maxin Schneider has seen that up close.
- Ep 21: How DEI fuels business success with Iveta Bukane: Why DEI isn't just a moral imperative; it's a business necessity.
- Ep 20: Driving tangible sustainability outcomes with Freja Landewall: Discover how ESG goes far beyond the environment, encompassing people, governance, and the long-term resilience of business.
- Ep 19: Future-proofing public services in Sweden with Marie Ceder: Between demographic changes, the rise of AI, and digitalisation, the public sector is at a pivotal moment.
- Ep 18: Making inclusion part of our everyday work with Ida Algotsson: What does inclusion truly mean at Visma, not just as values, but as everyday actions?
- Ep 17: Sustainability at the heart of business with Robin Åkerberg: Honouring our responsibility goes well beyond the numbers; it starts with a shared purpose and values.
- Ep 16: Innovation for the public good with Kasper Lyhr: Serving the public sector goes way beyond software; it's about shaping the future of society as a whole.
- Ep 15: Leading with transparency and vulnerability with Ellen Sano: What does it mean to be a "firestarter" in business?
- Ep 14: Women, innovation, and the future of Visma with Merete Hverven: Our CEO, Merete, knows that great leadership takes more than just hard work; it takes vision.
- Ep 13: Building partnerships beyond software with Daniel Ognøy Kaspersen: What does it look like when an accounting software company delivers more than just great software?
- Ep 12: AI in the accounting sphere with Joris Joppe: Artificial intelligence is changing industries across the board, and accounting is no exception. But in such a highly specialised field, what does change actually look like?
- Ep 11: From Founder to Segment Director with Ari-Pekka Salovaara: Ari-Pekka is a serial entrepreneur who joined Visma when his company was acquired in 2010. He now leads the small business segment.
- Ep 10: When brave choices can save a company with Charlotte von Sydow: What's it like stepping in as the Managing Director for a company in decline?
- Ep 09: Revolutionising tax tech in Italy with Enrico Mattiazzi and Vito Lomele: Take one look at their product, their customer reviews, or their workplace awards, and it's clear why Fiscozen leads Italy's tax tech scene.
- Ep 08: Navigating the waters of entrepreneurship with Steffen Torp: When it comes to being an entrepreneur, the journey is as personal as it is unpredictable.
- Ep 07: The untold stories of Visma with Øystein Moan: What did Visma look like in its early days? Are there any decisions our former CEO would have made differently?
- Ep 06: Measure what matters: Employee engagement with Vibeke Müller: Research shows that having engaged, happy employees is so important for building a great company culture and performing better financially.
- Ep 05: Our Team Visma | Lease a Bike sponsorship with Anne-Grethe Thomle Karlsen: It's one thing to sponsor the world's best cycling team; it's a whole other thing to provide software and expertise that helps them do what they do best.
- Ep 04: "How do you make people care about security?" with Joakim Tauren: With over 700 applications across the Visma Group (and counting!), cybersecurity is make-or-break for us.
- Ep 03: The human side of enterprise with Yvette Hoogewerf: As a software company, our products are central to our business... but that's only one part of the equation.
- Ep 02: From Management Trainee to CFO with Stian Grindheim: How does someone work their way up from Management Trainee to CFO by the age of 30? And balance fatherhood alongside it all?
- Ep 01: An optimistic look at the future of AI with Jacob Nyman: We're all-too familiar with the fears surrounding artificial intelligence. So today, Jacob and Johan are flipping the script.
- (Trailer) Introducing: Voice of Visma: These are the stories that shape us... and the reason Visma is unlike anywhere else.
Visma Software International AS
Organisation number: 980858073 MVA (Foretaksregisteret/The Register of Business Enterprises)
Main office: Karenslyst allé 56, 0277 Oslo, Norway
Postal address: PO box 733, Skøyen, 0214 Oslo, Norway
visma@visma.com
© 2026 Visma
https://logging.apache.org/log4j/2.x/manual/jmx.html | JMX :: Apache Log4j

JMX

Log4j 2 has built-in support for JMX. When JMX support is enabled, the Status Logger, the ContextSelector, and all LoggerContexts, LoggerConfigs, and Appenders are instrumented with MBeans. Also included is a simple client GUI that can be used to monitor the status logger output, as well as to remotely reconfigure Log4j with a different configuration file or to edit the current configuration directly.

Enabling JMX

JMX support is disabled by default. (JMX support was enabled by default in Log4j 2 versions before 2.24.0.)
To enable JMX support, set the log4j2.disableJmx system property when starting the Java VM: log4j2.disableJmx=false Local Monitoring and Management To perform local monitoring you need to set the log4j2.disableJmx system property to false . The JConsole tool that is included in the Java JDK can be used to monitor your application. Start JConsole by typing $JAVA_HOME/bin/jconsole in a command shell. For more details, see Oracle’s documentation at how to use JConsole . Remote Monitoring and Management To enable monitoring and management from remote systems, set the following two system properties when starting the Java VM: log4j2.disableJmx=false and com.sun.management.jmxremote.port=portNum In the property above, portNum is the port number through which you want to enable JMX RMI connections. For more details, see Oracle’s documentation at Remote Monitoring and Management . RMI impact on Garbage Collection Be aware that RMI by default triggers a full GC every hour. See the Oracle documentation for the sun.rmi.dgc.server.gcInterval and sun.rmi.dgc.client.gcInterval properties. The default value of both properties is 3600000 milliseconds (one hour). Before Java 6, it was one minute. The two sun.rmi arguments reflect whether your JVM is running in server or client mode. If you want to modify the GC interval time it may be best to specify both properties to ensure the argument is picked up by the JVM. An alternative may be to disable explicit calls to System.gc() altogether with -XX:+DisableExplicitGC , or (if you are using the CMS or G1 collector) add -XX:+ExplicitGCInvokesConcurrent to ensure the full GCs are done concurrently in parallel with your application instead of forcing a stop-the-world collection. 
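As a quick way to inspect JMX instrumentation from the client side without JConsole, the following stdlib-only sketch queries the platform MBean server for registered MBean domains. The `org.apache.logging.log4j2` domain used below is an assumption about where Log4j 2 registers its MBeans; verify it against your Log4j version:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPeek {
    // Returns true if any MBean is registered under the given JMX domain.
    static boolean hasDomain(String domain) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return !server.queryNames(new ObjectName(domain + ":*"), null).isEmpty();
    }

    public static void main(String[] args) throws Exception {
        // The JVM's own platform MBeans are always present.
        System.out.println("java.lang MBeans present: " + hasDomain("java.lang"));
        // With JMX support enabled, Log4j is expected to register MBeans here
        // (assumed domain name; check your version's documentation).
        System.out.println("Log4j MBeans present: " + hasDomain("org.apache.logging.log4j2"));
    }
}
```

Running this inside a JVM where Log4j's JMX support is enabled should report the Log4j domain as present; in a plain JVM only the built-in platform MBeans will be found.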
Log4j Instrumented Components The best way to find out which methods and attributes of the various Log4j components are accessible via JMX is to look at the org.apache.logging.log4j.core.jmx package contents in the log4j-core artifact, or to explore them directly in JConsole. The screenshot below shows the Log4j MBeans in JConsole. Client GUI The Apache Log4j JMX GUI is a basic client GUI that can be used to monitor the StatusLogger output and to remotely modify the Log4j configuration. The client GUI can be run as a stand-alone application or as a JConsole plug-in. Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Please read our privacy policy. Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/manual/implementation.html | Reference implementation :: Apache Log4j Reference implementation The reference implementation of the Log4j API is called Log4j Core. It is a versatile, reliable, feature-rich, and production-ready implementation. The remaining chapters of the manual describe the ins and outs of this logging implementation: Do you want to learn about its architecture? Go to Architecture. Do you want to install Log4j Core? Go to Installing Log4j Core. Do you want to configure Log4j Core? Go to Configuration. Do you want to write custom components for Log4j Core? Go to Extending. Do you want to tune your installation for better performance? Go to Performance.
| 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/manual/messages.html | Messages :: Apache Log4j Messages Unlike other logging APIs, which either restrict the description of log events to (possibly interpolated) Java Strings or allow generic Java Objects, the Log4j API encapsulates every log message into the logging-specific Message interface before passing it to the logging implementation. This approach opens a wide range of customization possibilities to developers. Log messages are often used interchangeably with log events. While this simplification holds in several cases, it is not technically correct. A log event, capturing the logging context (level, logger name, instant, etc.)
along with the log message, is generated by the logging implementation (e.g., Log4j Core) when a user issues a log using a logger, e.g., LOGGER.info("Hello, world!"). Hence, log events are compound objects containing log messages. Log events contain fields that can be classified into three categories: Some fields are provided explicitly, in a Logger method call. The most important are the log level and the log message, which is a description of what happened and is addressed to humans. Some fields are contextual (e.g., Thread Context) and are either provided explicitly by developers of other parts of the application, or are injected by Java instrumentation. The last category comprises fields that are computed automatically by the logging implementation employed. For clarity's sake, let us look at a log event formatted as JSON: { (1) "log.level": "INFO", "message": "Unable to insert data into my_table.", "error.type": "java.lang.RuntimeException", "error.message": null, "error.stack_trace": [ { "class": "com.example.Main", "method": "doQuery", "file.name": "Main.java", "file.line": 36 }, { "class": "com.example.Main", "method": "main", "file.name": "Main.java", "file.line": 25 } ], "marker": "SQL", "log.logger": "com.example.Main", (2) "tags": [ "SQL query" ], "labels": { "span_id": "3df85580-f001-4fb2-9e6e-3066ed6ddbb1", "trace_id": "1b1f8fc9-1a0c-47b0-a06f-af3c1dd1edf9" }, (3) "@timestamp": "2024-05-23T09:32:24.163Z", "log.origin.class": "com.example.Main", "log.origin.method": "doQuery", "log.origin.file.name": "Main.java", "log.origin.file.line": 36, "process.thread.id": 1, "process.thread.name": "main", "process.thread.priority": 5 } 1 Explicitly supplied fields: log.level The level of the event, either explicitly provided as an argument to the logger call, or implied by the name of the logger method message The log message that describes what happened error.* An optional Throwable explicitly passed as an
argument to the logger call marker An optional marker explicitly passed as an argument to the logger call log.logger The logger name provided explicitly to LogManager.getLogger() or inferred by the Log4j API 2 Contextual fields: tags The Thread Context stack labels The Thread Context map 3 Logging backend specific fields. In case you are using Log4j Core, the following fields can be automatically generated: @timestamp The instant of the logger call log.origin.* The location of the logger call in the source code process.thread.* Properties of the Java thread in which the logger is called Usage While internally Log4j uses Message objects, the Logger interface provides various shortcut methods to create the most commonly used messages: To create a SimpleMessage from a String argument, the following logger calls are equivalent: LOGGER.error("Houston, we have a problem.", exception); LOGGER.error(new SimpleMessage("Houston, we have a problem."), exception); To create a ParameterizedMessage from a format String and an array of object parameters, the following logger calls are equivalent: LOGGER.error("Unable to process user with ID `{}`", userId, exception); LOGGER.error(new ParameterizedMessage("Unable to process user with ID `{}`", userId), exception); In most cases, this is sufficient. Beyond the use cases that String-based messages suffice for, the Message interface abstraction also allows users to log custom objects. This effectively provides logging convenience in certain use cases.
For instance, imagine a scenario that uses a domain event to signal authentication failures: record LoginFailureEvent(String userName, InetSocketAddress remoteAddress) {} When the developer wants to log a message reporting the event, we can see that the string construction becomes challenging to read: LOGGER.info( "Connection closed by authenticating user {} {} port {} [preauth]", event.userName(), event.remoteAddress().getHostName(), event.remoteAddress().getPort()); By extending the Message interface, developers can simplify the reporting of a login failure: record LoginFailureEvent(String userName, InetSocketAddress remoteAddress) implements Message { (1) @Override public String getFormattedMessage() { (2) return "Connection closed by authenticating user " + userName() + " " + remoteAddress().getHostName() + " port " + remoteAddress().getPort() + " [preauth]"; } // Other methods } 1 The domain model needs to implement the Message interface 2 getFormattedMessage() provides the String to be logged As a result, logging of LoginFailureEvent instances can be simplified as follows: LOGGER.info(event); Collection This section explains predefined Log4j Message implementations addressing certain use cases. We will group this collection under the following titles: Types intended for plain String-based messages Types intended for structured logging String-based types This section explains message types intended for human-readable String-typed output. FormattedMessage FormattedMessage is intended as a generic entry point to actual message implementations that use pattern-based formatting. It works as follows: If the input is a valid MessageFormat pattern, use MessageFormatMessage If the input is a valid String.format() pattern, use StringFormattedMessage Otherwise, use ParameterizedMessage Due to the checks involved, FormattedMessage has an extra performance overhead compared to directly using a concrete Message implementation.
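Since FormattedMessage dispatches on the pattern style, it helps to see the two underlying stdlib formatting mechanisms side by side. The following self-contained comparison uses made-up table/row values; it only demonstrates MessageFormat-style versus String.format-style patterns, not Log4j itself:

```java
import java.text.MessageFormat;

public class PatternStyles {
    // MessageFormatMessage-style pattern: indexed {0}, {1} placeholders.
    static String viaMessageFormat(String table, int rows) {
        return MessageFormat.format("Unable to insert into {0} ({1} rows)", table, rows);
    }

    // StringFormattedMessage-style pattern: printf-style %s and %d conversions.
    static String viaStringFormat(String table, int rows) {
        return String.format("Unable to insert into %s (%d rows)", table, rows);
    }

    public static void main(String[] args) {
        System.out.println(viaMessageFormat("my_table", 42));
        System.out.println(viaStringFormat("my_table", 42));
    }
}
```

Both calls produce the same text here; the difference that FormattedMessage has to detect is purely in the pattern syntax.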
LocalizedMessage LocalizedMessage incorporates a ResourceBundle and allows the message pattern parameter to be the key to the message pattern in the bundle. If no bundle is specified, LocalizedMessage will attempt to locate a bundle with the name of the Logger used to log the event. The message retrieved from the bundle will be formatted using a FormattedMessage. LocalizedMessage is primarily provided for compatibility with Log4j 1. We advise you to perform log message localization at the presentation layer of your application, e.g., the client UI. MessageFormatMessage MessageFormatMessage formats its input using Java's MessageFormat. While MessageFormatMessage offers more flexibility compared to ParameterizedMessage, the latter is engineered for performance, e.g., it is garbage-free. You are recommended to use ParameterizedMessage for performance-sensitive setups. ObjectMessage ObjectMessage is a wrapper Message implementation to log custom domain model instances. It formats an input Object by calling its toString() method. If the object implements StringBuilderFormattable, it uses formatTo(StringBuilder) instead. ObjectMessage can be thought of as a convenience for ParameterizedMessage such that the following message instances are analogous: new ObjectMessage(obj) new ParameterizedMessage("{}", obj) That is, they will both be formatted in the same way, and Message#getParameters() will return an Object[] containing only obj. Hence, ObjectMessage is intended more as a marker to indicate the single value it encapsulates. ReusableObjectMessage provides functionality equivalent to ObjectMessage, plus methods to replace its content to enable Garbage-free logging. When garbage-free logging is enabled, loggers will use this instead of ObjectMessage. ParameterizedMessage ParameterizedMessage accepts a formatting pattern containing {} placeholders and a list of arguments.
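To make the `{}` substitution concrete, here is a hypothetical, stdlib-only re-implementation of the placeholder logic. The real ParameterizedMessage additionally handles escaped braces and array arguments, which this toy omits:

```java
public class BraceFormat {
    // Replaces each "{}" in the pattern with the next argument, left to right.
    // Leftover placeholders (more "{}" than arguments) are kept literally.
    static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (argIndex < args.length && pattern.startsWith("{}", i)) {
                out.append(args[argIndex++]);
                i += 2; // skip over the "{}" pair
            } else {
                out.append(pattern.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(format("Unable to process user with ID `{}`", 42));
    }
}
```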
It formats the message such that each {} placeholder in the pattern is replaced with the corresponding argument. ReusableParameterizedMessage provides functionality equivalent to ParameterizedMessage, plus methods to replace its content to enable Garbage-free logging. When garbage-free logging is enabled, loggers will use this instead of ParameterizedMessage. SimpleMessage SimpleMessage encapsulates a String or CharSequence that requires no formatting. ReusableSimpleMessage provides functionality equivalent to SimpleMessage, plus methods to replace its content to enable Garbage-free logging. When garbage-free logging is enabled, loggers will use this instead of SimpleMessage. StringFormattedMessage StringFormattedMessage accepts a format string and a list of arguments. It formats the message using java.lang.String#format(). While StringFormattedMessage offers more flexibility compared to ParameterizedMessage, the latter is engineered for performance, e.g., it is garbage-free. You are recommended to use ParameterizedMessage for performance-sensitive setups. ThreadDumpMessage If a ThreadDumpMessage is logged, Log4j generates stack traces for all threads. These stack traces will include any held locks. Structured types Log4j strives to provide best-in-class support for structured logging. It complements structured layouts with message types that allow users to create structured messages, effectively resulting in an end-to-end structured logging experience. This section will introduce the predefined structured message types. What is structured logging? In almost any modern production deployment, logs are no longer written to files read by engineers while troubleshooting, but are forwarded to log ingestion systems (Elasticsearch, Google Cloud Logging, etc.) for several observability use cases ranging from logging to metrics. This requires applications to structure their logs in a machine-readable way, ready to be delivered to an external system.
This act of encoding logs following a certain structure is called structured logging. MapMessage MapMessage is a Message implementation that models a Java Map with String-typed keys and values. It is an ideal generic message type for passing structured data. MapMessage implements MultiformatMessage to facilitate encoding of its content in multiple formats. It supports the following formats: Format Description XML format as XML JSON format as JSON JAVA format as Map#toString() (the default) JAVA_UNQUOTED format as Map#toString(), but without quotes Some appenders handle MapMessages differently when there is no layout: JMS Appender converts to a JMS javax.jms.MapMessage or jakarta.jms.MapMessage JDBC Appender converts to values in an SQL INSERT statement MongoDB NoSQL provider converts to fields in a MongoDB object JSON Template Layout JSON Template Layout has specialized handling for MapMessages to properly encode them as JSON objects. StructuredDataMessage StructuredDataMessage formats its content in a way compliant with the Syslog message format described in RFC 5424. RFC 5424 Layout StructuredDataMessage is mostly intended to be used in combination with RFC 5424 Layout, which has specialized handling for StructuredDataMessages. By combining the two, users gain complete control over how their message is encoded in a way compliant with RFC 5424, while RFC 5424 Layout makes sure the rest of the information attached to the log event is properly injected. JSON Template Layout Since StructuredDataMessage extends MapMessage, for which JSON Template Layout has specialized handling, StructuredDataMessages will be properly encoded by JSON Template Layout too. Performance As explained in Usage, SimpleMessage and ParameterizedMessage instances are created indirectly while interacting with Logger methods: info(), error(), etc. In a modern JVM, the allocation cost difference between these Message instances and plain String objects is marginal.
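The recycling idea behind garbage-free logging can be sketched with a per-thread reusable buffer. This is an illustrative toy, not Log4j's implementation; note that the final toString() here still allocates a String, whereas real garbage-free layouts pass the StringBuilder onward instead of materializing one:

```java
public class RecyclingSketch {
    // Reuse one StringBuilder per thread instead of allocating per message.
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(() -> new StringBuilder(128));

    // Formats a single "{}" placeholder using the recycled buffer.
    static String format(String pattern, Object arg) {
        StringBuilder sb = BUFFER.get();
        sb.setLength(0); // reset the buffer instead of reallocating it
        int i = pattern.indexOf("{}");
        if (i < 0) {
            sb.append(pattern);
        } else {
            sb.append(pattern, 0, i).append(arg).append(pattern, i + 2, pattern.length());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(format("truncating table `{}`", "fruits"));
    }
}
```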
If you observe this cost to be significant enough for your use case, you can enable Garbage-free logging . This will effectively cause Message instances to be recycled and avoid creating pressure on the garbage collector. In such a scenario, if you also have custom message types, consider implementing StringBuilderFormattable and introducing a message recycling mechanism. Extending If predefined message types fall short of addressing your needs, you can extend from the Message interface to either create your own message types or make your domain models take control of the message formatting. Example custom message class Snippet from CustomMessageExample.java record LoginFailureEvent(String userName, InetSocketAddress remoteAddress) implements Message, StringBuilderFormattable { (1) @Override public void formatTo(StringBuilder buffer) { (2) buffer.append("Connection closed by authenticating user ") .append(userName()) .append(" ") .append(remoteAddress().getHostName()) .append(" port ") .append(remoteAddress().getPort()) .append(" [preauth]"); } @Override public String getFormattedMessage() { (3) StringBuilder buffer = new StringBuilder(); formatTo(buffer); return buffer.toString(); } } 1 Extending from both Message and StringBuilderFormattable interfaces 2 Formats the message directly into a StringBuilder 3 getFormattedMessage() reuses formatTo() Format type You can extend from MultiformatMessage (and optionally from MultiFormatStringBuilderFormattable ) to implement messages that can format themselves in one or more encodings; JSON, XML, etc. Layouts leverage this mechanism to encode a message in a particular format. For instance, when JSON Template Layout figures out that the array returned by getFormats() of a MultiformatMessage contains JSON , it injects the MultiformatMessage#getFormattedMessage({"JSON"}) output as is without quoting it. 
Marker interfaces There are certain Log4j API interfaces that you can optionally extend from in your Message implementations to enable associated features: LoggerNameAwareMessage LoggerNameAwareMessage is a marker interface with a setLoggerName(String) method. This method will be called during event construction to pass the name of the associated Logger to the Message. MultiformatMessage MultiformatMessage extends from Message to support multiple format types. For example, see MapMessage.java extending from MultiformatMessage to support multiple formats; XML, JSON, etc. MultiFormatStringBuilderFormattable MultiFormatStringBuilderFormattable extends StringBuilderFormattable to support multiple format types. StringBuilderFormattable Many layouts recycle StringBuilders to encode log events without generating garbage, which results in significant performance benefits. StringBuilderFormattable is the primary interface facilitating the formatting of objects to a StringBuilder. TimestampMessage TimestampMessage provides a getTimestamp() method that will be called during log event construction to determine the instant instead of using the current timestamp. Message implementations that want to control the timestamp of the log event they are encapsulated in can extend TimestampMessage. | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/manual/getting-started.html | Getting started :: Apache Log4j Getting started This document aims to guide you through the most important aspects of logging with Log4j. It is not a comprehensive guide, but it should give you a good starting point. What is logging? Logging is the act of publishing diagnostic information at certain points of a program's execution. It means you can write messages to a log file or console to help you understand what your application is doing.
The simplest way to log in Java is to use System.out.println(), like this: private void truncateTable(String tableName) { System.out.println("truncating table"); (1) db.truncate(tableName); } 1 The information that a table is being truncated is written to the console. This is already useful, but the reader of this message does not know what table is being truncated. Usually, we would like to include the table name in the message, which quickly leads developers to use System.out.format() or similar methods. Log4j helps with formatting strings as we will see later, but for now, let's see how to work without it. The following code shows how this method can be improved to provide more context about its action. private void truncateTable(String tableName) { System.out.format("[WARN] truncating table `%s`%n", tableName); (1) db.truncate(tableName); } 1 format() writes the message to the console, replacing %s with the value of tableName, and %n with a new line. If the developer decides to truncate the table fruits, the output of this code will look like this: [WARN] truncating table `fruits` This provides observability into an application's runtime, and we can follow the execution flow. However, there are several drawbacks with the above approach, and this is where Log4j comes in. Log4j helps you write logs in a more structured way, with more information, and with more flexibility. Why should I use Log4j? Log4j is a versatile, industrial-grade Java logging framework, maintained by many contributors. It can help us with common logging tasks and lets us focus on the application logic. It helps with: Enhancing the message with additional information (timestamp, file, class, and method name, line number, host, severity, etc.) Formatting the message according to a given layout (CSV, JSON, etc.) Writing the message to various targets using an appender (console, file, socket, database, queue, etc.) Filtering messages to be written (e.g.
filter by severity, content, etc.) What is Log4j composed of? Log4j is essentially composed of a logging API called Log4j API, and its reference implementation called Log4j Core. Log4j also bundles several logging bridges to enable Log4j Core to consume from foreign logging APIs. Let's briefly explain these concepts: Logging API A logging API is an interface your code or your dependencies directly logs against. It is required at compile-time. It is implementation agnostic to ensure that your application can write logs without being tied to a specific logging implementation. Log4j API, SLF4J, JUL (Java Logging), JCL (Apache Commons Logging), JPL (Java Platform Logging), and JBoss Logging are major logging APIs. Logging implementation A logging implementation is only required at runtime and can be changed without the need to recompile your software. Log4j Core, JUL (Java Logging), and Logback are the most well-known logging implementations. Logging bridge Logging implementations accept input from a single logging API of their preference: Log4j Core from Log4j API, Logback from SLF4J, etc. A logging bridge is a simple logging implementation of a logging API that forwards all messages to a foreign logging API. Logging bridges allow a logging implementation to accept input from other logging APIs that are not their primary logging API. For instance, log4j-slf4j2-impl bridges SLF4J calls to the Log4j API and effectively enables Log4j Core to accept input from SLF4J. What are the installation prerequisites? We will need a BOM (Bill of Materials) to manage the versions of the dependencies. This way we won't need to provide the version for each Log4j module explicitly.
Maven Gradle <dependencyManagement> <dependencies> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-bom</artifactId> <version>2.25.3</version> <scope>import</scope> <type>pom</type> </dependency> </dependencies> </dependencyManagement> dependencies { implementation platform('org.apache.logging.log4j:log4j-bom:2.25.3') } How do I log using Log4j API? To log, you need a Logger instance, which you will retrieve from the LogManager. These are all part of the log4j-api module, which you can install as follows: Maven Gradle <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>${log4j-api.version}</version> </dependency> implementation 'org.apache.logging.log4j:log4j-api:${log4j-api.version}' You can use the Logger instance to log by using methods like info(), warn(), error(), etc. These methods are named after the log levels they represent, a way to categorize log events by severity. The log message can also contain placeholders written as {} that will be replaced by the arguments passed to the method. import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.LogManager; public class DbTableService { private static final Logger LOGGER = LogManager.getLogger(); (1) public void truncateTable(String tableName) throws IOException { LOGGER.warn("truncating table `{}`", tableName); (2) db.truncate(tableName); } } 1 The returned Logger instance is thread-safe and reusable. Unless explicitly provided as an argument, getLogger() associates the returned Logger with the enclosing class, that is, DbTableService in this example. 2 The placeholder {} in the message will be replaced with the value of tableName The generated log event, which contains the user-provided log message and log level (i.e., WARN), will be enriched with several other implicitly derived pieces of contextual information: timestamp, class & method name, line number, etc.
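The severity ordering that drives level-based filtering can be illustrated with a hypothetical enum. Note that Log4j's real Level class uses numeric weights internally rather than enum ordinals, so this is only a sketch of the ordering DEBUG < INFO < WARN < ERROR:

```java
public class LevelDemo {
    // Hypothetical severity enum mirroring the ordering of common Log4j levels.
    enum Level { DEBUG, INFO, WARN, ERROR }

    // A logger configured at `threshold` publishes events at that level or above.
    static boolean isEnabled(Level threshold, Level event) {
        return event.ordinal() >= threshold.ordinal();
    }

    public static void main(String[] args) {
        System.out.println(isEnabled(Level.WARN, Level.ERROR)); // ERROR passes a WARN threshold
        System.out.println(isEnabled(Level.WARN, Level.INFO));  // INFO is filtered out
    }
}
```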
What happens to the generated log event will vary significantly depending on the configuration used. It can be pretty-printed to the console, written to a file, or ignored entirely due to insufficient severity or some other filtering. Log levels are used to categorize log events by severity and control the verbosity of the logs. Log4j contains various predefined levels, but the most common are DEBUG, INFO, WARN, and ERROR. With them, you can filter out less important logs and focus on the most critical ones. Previously we used Logger#warn() to log a warning message, which could mean that something is not right, but the application can continue. Log levels have a priority, and WARN is less severe than ERROR. Exceptions are often also errors. In this case, we might use the ERROR log level. Make sure to log exceptions that have diagnostic value. This is simply done by passing the exception as the last argument to the log method: LOGGER.warn("truncating table `{}`", tableName); try { db.truncate(tableName); } catch (IOException exception) { LOGGER.error("failed truncating table `{}`", tableName, exception); (1) throw new IOException("failed truncating table: " + tableName, exception); } 1 By using error() instead of warn(), we signal that the operation failed. While there is only one placeholder in the message, we pass two arguments: tableName and exception. Log4j will attach the last extra argument of type Throwable in a separate field to the generated log event. Best practices There are several widespread bad practices while using Log4j API. Below we will walk through the most common ones and see how to fix them. For a complete list, refer to the Log4j API best practices page. Don't use toString() Don't use Object#toString() in arguments, it is redundant! /* BAD!
*/ LOGGER.info("userId: {}", userId.toString()); The underlying message type and layout will deal with arguments: /* GOOD */ LOGGER.info("userId: {}", userId); Pass exception as the last extra argument Don't call Throwable#printStackTrace()! This not only circumvents the logging but can also leak sensitive information! /* BAD! */ exception.printStackTrace(); Don't use Throwable#getMessage()! This prevents the log event from getting enriched with the exception. /* BAD! */ LOGGER.info("failed", exception.getMessage()); /* BAD! */ LOGGER.info("failed for user ID `{}`: {}", userId, exception.getMessage()); Don't provide both Throwable#getMessage() and the Throwable itself! This bloats the log message with a duplicate exception message. /* BAD! */ LOGGER.info("failed for user ID `{}`: {}", userId, exception.getMessage(), exception); Pass the exception as the last extra argument: /* GOOD */ LOGGER.error("failed", exception); /* GOOD */ LOGGER.error("failed for user ID `{}`", userId, exception); Don't use string concatenation If you are using String concatenation while logging, you are doing something very wrong and dangerous! Don't use String concatenation to format arguments! This circumvents the handling of arguments by the message type and layout. More importantly, this approach is prone to attacks! Imagine userId being provided by the user with the following content: placeholders for non-existing args to trigger failure: {} {} {dangerousLookup} /* BAD! */ LOGGER.info("failed for user ID: " + userId); Use message parameters: /* GOOD */ LOGGER.info("failed for user ID `{}`", userId); How do I install Log4j Core to run my application? This section explains how to install Log4j Core to run an application. Are you implementing a library rather than an application? Please refer to How do I install Log4j Core for my library? instead. First, add the log4j-core runtime dependency to your application.
Second, it is highly recommended to add the log4j-layout-template-json runtime dependency to encode log events in JSON. This is the most secure way to format log events and should be preferred over the default PatternLayout , at least for production deployments. Maven Gradle <project> <!-- Assuming `log4j-bom` is added --> <dependencies> <!-- Logging implementation (Log4j Core) --> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <scope>runtime</scope> (1) </dependency> <!-- Log4j JSON-encoding support --> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-layout-template-json</artifactId> <scope>runtime</scope> (1) </dependency> </dependencies> </project> dependencies { // Assuming `log4j-bom` is added // The logging implementation (i.e., Log4j Core) runtimeOnly 'org.apache.logging.log4j:log4j-core' (1) // Log4j JSON-encoding support runtimeOnly 'org.apache.logging.log4j:log4j-layout-template-json' (1) } 1 For applications, the logging implementation needs to be a runtime dependency. If your application has (direct or transitive!) dependencies that use another logging API, you need to bridge that to Log4j. This way, the foreign logging API calls will effectively be consumed by Log4j too. SLF4J is another logging API commonly used in the wild. ( Installation covers all supported foreign APIs.) 
Let’s see how you can use the log4j-slf4j2-impl bridge to support SLF4J: Maven Gradle <project> <!-- Assuming `log4j-bom` is added --> <dependencies> <!-- Assuming `log4j-core` and `log4j-layout-template-json` are added --> <!-- SLF4J-to-Log4j bridge --> (2) <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j2-impl</artifactId> <scope>runtime</scope> (1) </dependency> </dependencies> </project> dependencies { // Assuming `log4j-bom`, `log4j-core`, and `log4j-layout-template-json` are added // SLF4J-to-Log4j bridge (2) runtimeOnly 'org.apache.logging.log4j:log4j-slf4j2-impl' (1) } 1 For applications, bridges need to be runtime dependencies. 2 Log4j module bridging SLF4J to Log4j To complete the installation, Log4j needs to be configured. Please continue with How do I configure Log4j Core to run my application ? How do I configure Log4j Core to run my application ? This section explains how to configure the way Log4j processes log events. Log4j supports several configuration inputs and file formats. Let’s start with a basic and robust configuration where the logs are encoded in JSON and written to the console. Save the following XML-formatted Log4j configuration file to src/main/resources/log4j2.xml in your application. An example src/main/resources/log4j2.xml <?xml version="1.0" encoding="UTF-8"?> <Configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="https://logging.apache.org/xml/ns" xsi:schemaLocation=" https://logging.apache.org/xml/ns https://logging.apache.org/xml/ns/log4j-config-2.xsd"> <Appenders> (1) <Console name="CONSOLE"> (2) <JsonTemplateLayout/> (3) </Console> </Appenders> <Loggers> <Logger name="com.mycompany" level="INFO"/> (4) <Root level="WARN"> (5) <AppenderRef ref="CONSOLE"/> (6) </Root> </Loggers> </Configuration> 1 Appenders are responsible for writing log events to a particular target: console, file, socket, database, queue, etc. 2 Console Appender writes logs to the console. 
3 Layouts are responsible for encoding log events before appenders write them. JSON Template Layout encodes log events in JSON. 4 Log events generated by classes in the com.mycompany package (incl. its sub-packages) and that are of level INFO or higher (i.e., WARN , ERROR , FATAL ) will be accepted. 5 Unless specified otherwise, log events of level WARN and higher will be accepted. It serves as the default <logger> configuration. 6 Unless specified otherwise, accepted log events will be forwarded to the console appender defined earlier. Next, you need to configure Log4j for the tests of your application. Please proceed to How do I configure Log4j Core for tests? How do I install Log4j Core for my library ? This section explains how to install Log4j Core for libraries. Are you implementing an application rather than a library ? Please refer to How do I install Log4j Core to run my application ? instead. Unlike applications, libraries should be logging implementation agnostic. That is, libraries should log through a logging API, but leave the decision of the logging implementation to the application . That said, libraries need a logging implementation while running their tests. Let’s see how you can install Log4j Core for your tests. Start by adding the log4j-core dependency in test scope to your library: Maven Gradle <project> <!-- Assuming `log4j-bom` is added --> <dependencies> <!-- Logging implementation (Log4j Core) --> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <scope>test</scope> (1) </dependency> </dependencies> </project> dependencies { // Assuming `log4j-bom` is already added // The logging implementation (i.e., Log4j Core) testRuntimeOnly 'org.apache.logging.log4j:log4j-core' (1) } 1 For tests of libraries, the logging implementation is only needed in test scope. If your library has (direct or transitive!) dependencies that use another logging API, you need to bridge that to Log4j. 
This way, the foreign logging API calls will effectively be consumed by Log4j too. SLF4J is another logging API commonly used in the wild. ( Installation covers all supported foreign APIs.) Let’s see how you can use the log4j-slf4j2-impl bridge to support SLF4J: Maven Gradle <project> <!-- Assuming `log4j-bom` is added --> <dependencies> <!-- Assuming `log4j-core` and `log4j-layout-template-json` are added --> <!-- SLF4J-to-Log4j bridge --> (2) <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j2-impl</artifactId> <scope>test</scope> (1) </dependency> </dependencies> </project> dependencies { // Assuming `log4j-bom`, `log4j-core`, and `log4j-layout-template-json` are added // SLF4J-to-Log4j bridge (2) testRuntimeOnly 'org.apache.logging.log4j:log4j-slf4j2-impl' (1) } 1 For tests of libraries, logging bridges are only needed in test scope. 2 Log4j module bridging SLF4J to Log4j Next, you need to configure Log4j. Please proceed to How do I configure Log4j Core for tests? How do I configure Log4j Core for tests? This section explains how to configure the way Log4j processes log events for tests. Log4j supports several configuration inputs and file formats. In contrast to an application’s more conservative Log4j setup , for tests we will go with a more developer-friendly Log4j configuration where the logs are pretty-printed to the console, and logging verbosity is increased. While it is not recommended to use Pattern Layout in production for security reasons, it is a good choice for tests to encode log events. We will use it to pretty-print the log event to the console with extra fields: timestamp, thread name, log level, class name, etc. The rest of the configuration should look familiar from earlier sections. 
Save the following XML-formatted Log4j configuration file to src/test/resources/log4j2-test.xml . An example src/test/resources/log4j2-test.xml <?xml version="1.0" encoding="UTF-8"?> <Configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="https://logging.apache.org/xml/ns" xsi:schemaLocation=" https://logging.apache.org/xml/ns https://logging.apache.org/xml/ns/log4j-config-2.xsd"> <Appenders> <Console name="CONSOLE"> <PatternLayout pattern="%d [%t] %5p %c{1.} - %m%n"/> (1) </Console> </Appenders> <Loggers> <Logger name="com.mycompany" level="DEBUG"/> (2) <Root level="WARN"> <AppenderRef ref="CONSOLE"/> </Root> </Loggers> </Configuration> 1 Pattern Layout is used for encoding the log event in a human-readable way. 2 Increased logging verbosity for the com.mycompany package. What is next? At this stage, you know How to install Log4j API and log using it How to install and configure Log4j Core in your application/library You can use the following pointers to further customize your Log4j setup. Installation While the shared dependency management snippets should get you going, your case might necessitate a more intricate setup. Are you dealing with a Spring Boot application? Is it running in a Java EE container? Do you need to take into account other logging APIs such as JUL, JPL, JCL, etc.? See Installation for the complete installation guide. Configuration Log4j can be configured in several ways in various file formats (XML, JSON, Properties, and YAML). See the Configuration file page for details. Appenders & Layouts Log4j contains several appenders and layouts to compose a configuration that best suits your needs. Performance Do you want to get the best performance out of your logging system? Make sure to check out the Performance page. Architecture Want to learn more about loggers, contexts, and how these are all wired together? See the Architecture page. Support Confused? Having a problem while setting up Log4j? See the Support page. 
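As a rough illustration of what the test configuration shown earlier produces, the pattern `%d [%t] %5p %c{1.} - %m%n` renders each log event on a single line; the timestamp, thread, and logger name below are made up for the example:

```
2025-05-05 12:34:56,789 [main] DEBUG c.m.DbTableService - truncating table `users`
```

Here `%c{1.}` shortens every package segment of the logger name to its first character, so `com.mycompany.DbTableService` is rendered as `c.m.DbTableService`, and `%5p` pads the level name to five characters so columns line up.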
Copyright © 1999-2025 The Apache Software Foundation . Licensed under the Apache Software License, Version 2.0 . Please read our privacy policy . Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34 |
https://logging.apache.org/log4j/2.x/manual/status-logger.html | Status Logger :: Apache Log4j Status Logger StatusLogger is a standalone, self-sufficient Logger implementation to record events that occur in the logging system (i.e., Log4j) itself. It is the logging system used by Log4j for reporting status of its internals. Usage You can use the status logger for several purposes: Troubleshooting When Log4j is not behaving in the way you expect it to, you can increase the verbosity of status logger messages emitted using the log4j2.debug system property for troubleshooting. See Configuration for details. 
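For instance, assuming an application packaged as a JAR (the file name below is hypothetical), the property can be passed when starting the JVM; since log4j2.debug needs no value, its mere presence raises the status logger verbosity to TRACE:

```
java -Dlog4j2.debug -jar my-app.jar
```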
Reporting internal status If you have custom Log4j components (layouts, appenders, etc.), you cannot use the Log4j API itself for logging, since this would result in a chicken-and-egg problem. This is where StatusLogger comes into play: private class CustomLog4jComponent { private static final Logger LOGGER = StatusLogger.getLogger(); void doSomething(String input) { LOGGER.trace("doing something with input: `{}`", input); } } Listening to internal status You can configure where the status logger messages are delivered to. See Listeners . Configuration StatusLogger can be configured in the following ways: Passing system properties to the Java process (e.g., -Dlog4j2.statusLoggerLevel=INFO ) Due to several complexities involved, you are strongly advised to configure the status logger only using system properties ! Providing properties in a "log4j2.StatusLogger.properties" file in the classpath Using the Log4j configuration (i.e., <Configuration status="WARN" dest="out"> in a log4j2.xml in the classpath) Since version 2.24.0 , the status attribute in the Configuration element is deprecated and should be replaced with the log4j2.statusLoggerLevel configuration property. Programmatically (e.g., StatusLogger.getLogger().setLevel(Level.WARN) ) It is crucial to understand that there is a window of time between the first StatusLogger access and the reading of a configuration file (e.g., log4j2.xml ). Consider the following example: The default level (of the fallback listener) is ERROR You have <Configuration status="WARN"> in your log4j2.xml Until your log4j2.xml configuration is read, the effective level will be ERROR Once your log4j2.xml configuration is read, the effective level will be WARN as you configured Hence, unless you use either system properties or a "log4j2.StatusLogger.properties" file in the classpath, there is a time window in which only the defaults will be effective. StatusLogger is designed as a singleton class accessed statically. 
If you are running an application containing multiple Log4j configurations (e.g., in a servlet environment with multiple containers), and you happen to have differing StatusLogger configurations (e.g., one log4j2.xml containing <Configuration status="ERROR"> while the other <Configuration status="INFO"> ), the last loaded configuration will be the effective one. Properties StatusLogger can be configured using the following system properties: log4j2.debug Env. variable LOG4J_DEBUG Type boolean Default value false If set to a value different from false , sets the level of the status logger to TRACE , overriding any other system property. log4j2.statusEntries Env. variable LOG4J_STATUS_ENTRIES Type int Default value 0 Specifies the number of status logger entries to cache. Once the limit is reached, newer entries will overwrite the oldest ones. log4j2.statusLoggerLevel Env. variable LOG4J_STATUS_LOGGER_LEVEL Type Level Default value ERROR Specifies the level of the status logger. Can be overridden by log4j2.debug . log4j2.statusLoggerDateFormat Env. variable LOG4J_STATUS_LOGGER_DATE_FORMAT Type DateTimeFormatter pattern Default value DateTimeFormatter.ISO_INSTANT Sets the DateTimeFormatter pattern used by the status logger to format dates. log4j2.statusLoggerDateFormatZone Env. variable LOG4J_STATUS_LOGGER_DATE_FORMAT_ZONE Type ZoneId Default value ZoneId.systemDefault() Sets the timezone id used by the status logger. See ZoneId for the accepted formats. Debug mode When the log4j2.debug system property is present, any level-related filtering will be skipped and all events will be notified to listeners. If no listeners are available, the fallback listener of type StatusConsoleListener will be used. Listeners Each log event recorded by StatusLogger will first get buffered and then used to notify the registered StatusListener s. If none are available, the fallback listener of type StatusConsoleListener will be used. 
You can programmatically register listeners using the StatusLogger#registerListener(StatusListener) method . | 2026-01-13T09:30:34 |
https://logging.apache.org/log4j/kotlin/index.html | Log4j Kotlin API :: Apache Log4j Kotlin Log4j Kotlin API Log4j Kotlin API provides a Kotlin-friendly interface to log against the Log4j API . The minimum requirements are Java 8 and Kotlin 1.6.21 . This is just a logging API. Your application still needs to have a logging backend (e.g., Log4j ) configured. Dependencies You need to have the org.apache.logging.log4j:log4j-api-kotlin dependency in your classpath: <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api-kotlin</artifactId> <version>1.5.0</version> </dependency> The Java module name and OSGi Bundle-SymbolicName are set to org.apache.logging.log4j.api.kotlin . Creating loggers A Logger is the primary interface through which users interact with Log4j Kotlin. You can create Logger s in two main ways: Associate them with the class ( Recommended! ) Associate them with the instance Creating class loggers For most applications, we recommend creating a single logger instance per class definition – not per class instance ! This not only avoids creating an extra logger field for each instance, but its access pattern also transparently communicates the implementation: the Logger is statically bound to the class definition. You can create class loggers in one of the following ways: Creating a logger in the companion object This is the traditional approach to creating class loggers. It also happens to be the most efficient one, since the logger lookup is performed once and its result is stored in the companion object shared by all instances of the class. 
import org.apache.logging.log4j.kotlin.logger class DbTableService { companion object { private val LOGGER = logger() (1) } fun truncateTable(tableName: String) { LOGGER.warn { "truncating table `${tableName}`" } db.truncate(tableName) } } 1 Create a Logger associated with the static class definition that all instances of the class share Extending the companion object from Logging The Logging interface contains a logger getter that you can use by extending the companion object from the Logging class: import org.apache.logging.log4j.kotlin.Logging class DbTableService { companion object: Logging (1) fun truncateTable(tableName: String) { logger.warn { "truncating table `${tableName}`" } db.truncate(tableName) } } 1 Extending the companion object from Logging effectively creates a single Logger instance Assigned to the logger field Associated with the static class definition that all instances of the class share This getter-based approach incurs an extra overhead (compared to Creating a logger in the companion object ) due to the logger lookup involved at runtime. Creating instance loggers Even though we recommend creating class loggers , there might be occasions (most notably while sharing classes in Jakarta EE environments ) necessitating loggers associated with each instance. You can achieve this as follows: Creating a logger in the class This is the traditional approach to creating instance loggers. It also happens to be the most efficient one, since the logger lookup is performed once and its result is stored in the instance field. 
import org.apache.logging.log4j.kotlin.logger class DbTableService { private val logger = logger() (1) fun truncateTable(tableName: String) { logger.warn { "truncating table `${tableName}`" } db.truncate(tableName) } } 1 Create a Logger associated with the class instance Extending the class from Logging Logging interface contains a logger getter that you can use by extending the class from Logging : import org.apache.logging.log4j.kotlin.Logging class DbTableService: Logging { (1) fun truncateTable(tableName: String) { logger.warn { "truncating table `${tableName}`" } db.truncate(tableName) } } 1 Extending the class from Logging effectively creates a single Logger instance Assigned to the logger field Exclusively associated with the class instance (i.e., not shared among instances!) This getter-based approach incurs an extra overhead (compared to Creating a logger in the class ) due to the logger lookup involved at runtime. Using logger extension property You can use the logger extension property to dynamically inject a logger at the spot: import org.apache.logging.log4j.kotlin.logger class DbTableService { fun truncateTable(tableName: String) { logger.warn { "truncating table `${tableName}`" } (1) db.truncate(tableName) } } 1 logger will look up the associated Logger instance for the encapsulating class This getter-based approach incurs an extra overhead (compared to Creating a logger in the class ) due to the logger lookup involved at runtime. Thread context The ThreadContext API has two facade objects provided: ContextMap and ContextStack . 
import org.apache.logging.log4j.kotlin.ContextMap import org.apache.logging.log4j.kotlin.ContextStack ContextMap["key"] = "value" assert(ContextMap["key"] == "value") assert("key" in ContextMap) ContextMap += "anotherKey" to "anotherValue" ContextMap -= "key" ContextStack.push("message") assert(!ContextStack.empty) assert(ContextStack.depth == 1) val message = ContextStack.peek() assert(message == ContextStack.pop()) assert(ContextStack.empty) A CoroutineThreadContext context element is provided to integrate logging context with coroutines. We provide convenience functions loggingContext and additionalLoggingContext to create instances of CoroutineThreadContext with the appropriate context data. The result of these functions can be passed directly to coroutine builders to set the context for the coroutine. To set the context, ignoring any context currently in scope: launch(loggingContext(mapOf("myKey" to "myValue"), listOf("test"))) { assertEquals("myValue", ContextMap["myKey"]) assertEquals("test", ContextStack.peek()) } Or to preserve the existing context and add additional logging context: launch(additionalLoggingContext(mapOf("myKey" to "myValue"), listOf("test"))) { assertEquals("myValue", ContextMap["myKey"]) assertEquals("test", ContextStack.peek()) } Alternatively, to change the context without launching a new coroutine, the withLoggingContext and withAdditionalLoggingContext functions are provided: withAdditionalLoggingContext(mapOf("myKey" to "myValue"), listOf("test")) { assertEquals("myValue", ContextMap["myKey"]) assertEquals("test", ContextStack.peek()) } These functions are shorthand for withContext(loggingContext(…​)) or withContext(additionalLoggingContext(…​)) . Parameter substitution Unlike Java, Kotlin provides native functionality for string templates . However, using a string template still incurs the message construction cost if the logger level is not enabled. 
To avoid this, prefer passing a lambda, which won’t be evaluated until necessary: logger.debug { "Logging in user ${user.name} with birthday ${user.calcBirthday()}" } Logger names Most logging implementations use a hierarchical scheme for matching logger names with logging configuration. In this scheme, the logger name hierarchy is represented by . (dot) characters in the logger name, in a fashion very similar to the hierarchy used for Java/Kotlin package names. The Logger property added by the Logging interface follows this convention: the interface ensures the Logger is automatically named according to the class it is being used in. The value returned when calling the logger() extension method depends on the receiver of the extension. When called within an object, the receiver is this and therefore the logger will again be named according to the class it is being used in. However, a logger named via another class can be obtained as well: import org.apache.logging.log4j.kotlin class MyClass: BaseClass { val logger = SomeOtherClass.logger() // ... } Explicitly Named Loggers An explicitly-named logger may be obtained via the logger function that takes a name parameter: import org.apache.logging.log4j.kotlin class MyClass: BaseClass { val logger = logger("MyCustomLoggerName") // ... } This is also needed in scopes that do not have a this object, such as top-level functions. | 2026-01-13T09:30:34 |
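The lazy-evaluation idea behind these lambda messages is not Kotlin-specific. Below is a minimal, stdlib-only Java sketch of the same principle; the ToyLogger class and all its names are invented for illustration (real code would use the Supplier-accepting methods of the Log4j API itself):

```java
import java.util.function.Supplier;

// Toy logger for illustration only (hypothetical, not a Log4j class):
// the message is built lazily via a Supplier, so a disabled level
// pays no string-formatting cost at all.
class ToyLogger {
    private final boolean debugEnabled;

    ToyLogger(boolean debugEnabled) {
        this.debugEnabled = debugEnabled;
    }

    void debug(Supplier<String> message) {
        if (debugEnabled) {
            // The supplier is invoked only when the level is enabled.
            System.out.println(message.get());
        }
    }
}

class LazyLoggingDemo {
    public static void main(String[] args) {
        final int[] built = {0}; // counts how often a message was constructed

        ToyLogger quiet = new ToyLogger(false);
        quiet.debug(() -> { built[0]++; return "expensive " + Math.sqrt(2); });
        if (built[0] != 0) {
            throw new AssertionError("message was built despite DEBUG being off");
        }

        ToyLogger chatty = new ToyLogger(true);
        chatty.debug(() -> { built[0]++; return "expensive " + Math.sqrt(2); });
        if (built[0] != 1) {
            throw new AssertionError("message should have been built exactly once");
        }
    }
}
```

The same trade-off the Kotlin page describes applies here: a plain string argument is always constructed, while the supplier defers that work until the level check has passed.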
https://logging.apache.org/log4j/jmx-gui/index.html | Log4j JMX GUI :: Apache Log4j JMX GUI Log4j JMX GUI The Log4j JMX GUI provides a Swing-based client for remotely editing the Log4j configuration and remotely monitoring StatusLogger output. It can be run as a standalone application or as a JConsole plugin. Log4j has built-in support for JMX. The StatusLogger , ContextSelector , and all LoggerContext s, LoggerConfig s and Appender s are instrumented with MBeans and can be remotely monitored and controlled. See the Log4j JMX support documentation on how to configure it. Running the client as a JConsole plugin To run the Log4j JMX GUI as a JConsole plugin, start JConsole with the following command: $JAVA_HOME/bin/jconsole -pluginpath /path/to/log4j-api-2.25.1.jar:/path/to/log4j-core-2.25.1.jar:/path/to/log4j-jmx-gui-2.23.0-SNAPSHOT.jar or on Windows: %JAVA_HOME%\bin\jconsole -pluginpath \path\to\log4j-api-2.25.1.jar;\path\to\log4j-core-2.25.1.jar;\path\to\log4j-jmx-gui-2.23.0-SNAPSHOT.jar If you execute the above command and connect to your application, you will see an extra Log4j 2 tab in the JConsole window. This tab contains the client GUI, with the StatusLogger selected. The screenshot below shows the StatusLogger panel in JConsole. Remotely editing the Log4j configuration The client GUI also contains a simple editor that can be used to remotely change the Log4j configuration. The screenshot below shows the configuration edit panel in JConsole. The configuration edit panel provides two ways to modify the Log4j configuration: specifying a different configuration location URI, or modifying the configuration XML directly in the editor panel. 
If you specify a different configuration location URI and click the "Reconfigure from Location" button, the specified file or resource must exist and be readable by the application, or an error will occur and the configuration will not change. If an error occurred while processing the contents of the specified resource, Log4j will keep its original configuration, but the editor panel will show the contents of the file you specified. The text area showing the contents of the configuration file is editable, and you can directly modify the configuration in this editor panel. Clicking the "Reconfigure with XML below" button will send the configuration text to the remote application where it will be used to reconfigure Log4j on the fly. This will not overwrite any configuration file. Reconfiguring with text from the editor happens in memory only and the text is not permanently stored anywhere. Running the client as a standalone application To run the Log4j JMX GUI as a standalone application, run the following command: $JAVA_HOME/bin/java -cp /path/to/log4j-api-2.25.1.jar:/path/to/log4j-core-2.25.1.jar:/path/to/log4j-jmx-gui-2.23.0-SNAPSHOT.jar org.apache.logging.log4j.jmx.gui.ClientGui <options> or on Windows: %JAVA_HOME%\bin\java -cp \path\to\log4j-api-2.25.1.jar;\path\to\log4j-core-2.25.1.jar;\path\to\log4j-jmx-gui-2.25.1.jar org.apache.logging.log4j.jmx.gui.ClientGui <options> where options is one of the following: <host>:<port> service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi service:jmx:rmi://<host>:<port>/jndi/rmi://<host>:<port>/jmxrmi The port number (i.e., port ) must be the same as the com.sun.management.jmxremote.port configured for the application you want to monitor. For example, if you started your application with these options: com.sun.management.jmxremote.port=33445 com.sun.management.jmxremote.authenticate=false com.sun.management.jmxremote.ssl=false Note that this disables all security, so this is not recommended for production environments! 
Oracle’s documentation on Remote Monitoring and Management provides details on how to configure JMX more securely with password authentication and SSL. Then you can run the client with this command: $JAVA_HOME/bin/java -cp /path/to/log4j-api-2.25.1.jar:/path/to/log4j-core-2.25.1.jar:/path/to/log4j-jmx-gui-2.23.0-SNAPSHOT.jar org.apache.logging.log4j.jmx.gui.ClientGui localhost:33445 or on Windows: %JAVA_HOME%\bin\java -cp \path\to\log4j-api-2.25.1.jar;\path\to\log4j-core-2.25.1.jar;\path\to\log4j-jmx-gui-2.25.1.jar org.apache.logging.log4j.jmx.gui.ClientGui localhost:33445 The screenshot below shows the StatusLogger panel of the client GUI when running as a standalone application. The screenshot below shows the configuration editor panel of the client GUI when running as a standalone application. | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/592 | LLVM Weekly - #592, May 5th 2025 Welcome to the five hundred and ninety-second issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events The next Cambridge (UK) Compiler Social will take place on June 5th . The next LLVM Meetup in Darmstadt will take place on May 28th . Arm’s developer blog covers Arm’s contributions to LLVM 20 . According to the LLVM Calendar, in the coming week there will be the following: Office hours with the following hosts: Johannes Doerfert, Quentin Colombet, Aaron Ballman. Online sync-ups on the following topics: MLIR C/C++ frontend, ClangIR upstreaming, pointer authentication, OpenMP, Clang C/C++ language working group, Flang, OpenMP for flang, memory safety working group. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Reid Kleckner provided updates on the LLVM_LINK_LLVM_DYLIB default change proposal . Maxim Kuvyrkov provided a summary of discussion on an LLVM LTS since the RFC was posted . Michele Scuttari started a discussion on quadratic scaling of bufferization in MLIR . Aiden Grossman is seeking agreement on running premerge postcommit through GitHub Actions . Joseph Huber started an RFC discussion on policy for top-level directories and language runtimes, with the following tl;dr: “ for the future where offload/ begins to contain other language’s runtimes. Should we prefer a separate top level directory for each one? 
Or should they all be put under offload/.” LLVM 20.1.4 was released . Anshil Gandhi has a proposal for inserting casts to increase vectorisation of loads and stores . The next MLIR open meeting will take place on May 6th and cover “side effect semantics, its use for Linalg operations and how it affects CSE as well as other transformations”. Sameer Sahasrabuddhe has an RFC on defining thread convergence in C++ languages such as HIP, CUDA, OpenCL . LLVM commits MC layer support was added for the RISC-V XAndesperf vendor extension. 6ba1a62 . atomicrmw fmaximum/fminimum is now supported. 6e49f73 . A scheduling model was added for the MIPS i6400 and i6500 CPUs. c22bc21 . TableGen’s subtarget emitter now prints a warning when a Processor contains duplicate Features. 951292b . Various RISC-V instruction predicates were converted to using the TIIPredicate mechanism. 8f75747 . Values of opaque types are no longer allowed in IR. 6feb4a8 . The update_foo_checks scripts were updated to continue on error when processing multiple inputs. 88b03aa . Clang commits A new -header-include-filtering=direct-per-file option was added. 2f1ef1d . -Wimplicit-int-enum-cast was added, which warns about implicit casts from int to an enumeration type in C, which is valid C but not compatible with C++. -Wjump-bypasses-init was also added. df267d7 , 543f112 . ClangIR upstreaming continues with initial switch statement support, union types, and more. 9d1f1c4 , 708053c . clang-format’s new OneLineFormatOffRegex option can be used to give a regex used to match a marker that disables clang-format for one line. 8effc8d . -Wunterminated-string-initialization warns upon an initialization from a string literal where the null terminator cannot be stored. Taking an example from the commit message: char buf[3] = "foo"; . e8ae779 . -ftime-report-json was added, which outputs timing data formatted as JSON. 4a6c81d . 
Other project commits Flang’s build system started to use precompiled headers, which appear to result in meaningful compile time and memory usage improvements. d68c732 . Volatile references can now be lowered in Flang. 8836bce . A benchmarking script was added for LLD. 6b25cfb . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://logging.apache.org/log4j/2.x/log4j-iostreams.html | Log4j IOStreams :: Apache Log4j a subproject of Apache Logging Services Home Download Release notes Support Versioning and maintenance policy Security Manual Getting started Installation API Loggers Event Logger Simple Logger Status Logger Fluent API Fish tagging Levels Markers Thread Context Messages Flow Tracing Implementation Architecture Configuration Configuration file Configuration properties Programmatic configuration Appenders File appenders Rolling file appenders Database appenders Network Appenders Message queue appenders Delegating Appenders Layouts JSON Template Layout Pattern Layout Lookups Filters Scripts JMX Extending Plugins Performance Asynchronous loggers Garbage-free logging References Plugin reference Java API reference Resources F.A.Q. Migrating from Log4j 1 Migrating from Logback Migrating from SLF4J Building GraalVM native images Integrating with Hibernate Integrating with Jakarta EE Integrating with service-oriented architectures Development Components Log4j IOStreams Log4j Spring Boot Support Log4j Spring Cloud Configuration JUL-to-Log4j bridge Log4j-to-JUL bridge Related projects Log4j Jakarta EE Log4j JMX GUI Log4j Kotlin Log4j Scala Log4j Tools Log4j Transformation Tools Log4j IOStreams The IOStreams component is a Log4j API extension that provides java.io-compatible classes which can either write to a Logger while also writing to another OutputStream or Writer, or wiretap the contents read by an InputStream or Reader, logging what passes through. Requirements The Log4j IOStreams API extension requires the Log4j 2 API. This component was introduced in Log4j 2.1. Usage The main entry point for the IOStreams module is the builder class IoBuilder , and in particular, the IoBuilder.forLogger() methods. One primary usage of this API extension is for setting up loggers in the JDBC API.
For example:

    PrintWriter logger = IoBuilder.forLogger(DriverManager.class)
        .setLevel(Level.DEBUG)
        .buildPrintWriter();
    DriverManager.setLogWriter(logger);

The IoBuilder class offers a few more options that can be set. In general, there are six primary classes one can build from it: Reader, Writer, PrintWriter, InputStream, OutputStream, and PrintStream. The input-oriented classes are for wiretapping, while the output-oriented classes create either an output class that solely emits its lines as log messages, or an output filter class that logs every line written through it while also passing it to its delegate output class. Copyright © 1999-2025 The Apache Software Foundation . Licensed under the Apache Software License, Version 2.0 . Please read our privacy policy . Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34
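The "output filter" behaviour described above boils down to a writer that forwards everything to a delegate while also handing each completed line to a logger. The following self-contained sketch uses only JDK classes; the LineLoggingWriter class and the Consumer stand-in for the Logger are hypothetical illustrations of the idea, not IoBuilder's actual internals:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A Writer that passes everything through to a delegate while handing each
// completed line to a log consumer, mimicking the wiretap/filter behaviour
// of the writers that IoBuilder produces.
class LineLoggingWriter extends Writer {
    private final Writer delegate;
    private final Consumer<String> logLine; // stand-in for Logger.debug(...)
    private final StringBuilder buffer = new StringBuilder();

    LineLoggingWriter(Writer delegate, Consumer<String> logLine) {
        this.delegate = delegate;
        this.logLine = logLine;
    }

    @Override
    public void write(char[] cbuf, int off, int len) throws IOException {
        delegate.write(cbuf, off, len); // pass through unchanged
        for (int i = off; i < off + len; i++) {
            if (cbuf[i] == '\n') {      // a line is complete: "log" it
                logLine.accept(buffer.toString());
                buffer.setLength(0);
            } else {
                buffer.append(cbuf[i]);
            }
        }
    }

    @Override public void flush() throws IOException { delegate.flush(); }
    @Override public void close() throws IOException { delegate.close(); }
}

public class Demo {
    public static void main(String[] args) throws IOException {
        List<String> logged = new ArrayList<>();
        StringWriter out = new StringWriter();
        try (Writer w = new LineLoggingWriter(out, logged::add)) {
            w.write("hello\nworld\n");
        }
        System.out.println(logged); // [hello, world]
        System.out.print(out);      // the unchanged pass-through text
    }
}
```

The delegate sees exactly what was written, while the consumer receives one call per completed line, which is the same split of responsibilities the real PrintWriter built by IoBuilder performs against a Logger.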
https://logging.apache.org/log4j/2.x/versioning.html | Versioning and maintenance policy :: Apache Log4j a subproject of Apache Logging Services Versioning and maintenance policy This page answers the following questions: How are Log4j releases versioned? How shall users align the Log4j API and Log4j Core versions? How does Log4j classify maintenance status of released versions? Versioning policy Since version 2.0 , Log4j follows semantic versioning , with release numbers of the form: <major>.<minor>.<patch>[-<pre-release>] where: <major> The major version number is incremented when breaking changes are introduced. Upgrading to a new major version typically requires code changes in your application. For each major release, a migration guide is provided.
See Migrating from Log4j 1 for instructions on migrating from Log4j 1 to Log4j 2. <minor> The minor version number is incremented when new features are added in a backward-compatible manner, such as: New Java methods or classes added to the public API of one of the Log4j artifacts . New configuration attributes added to Log4j Plugins (appenders, layouts, filters, etc.). Functionality or Java methods/classes being deprecated. Behavioral changes introduced without breaking the public API. Upgrading to a new minor version usually does not require code changes, unless you rely on undocumented behavior that has changed. To avoid accumulating such changes, we recommend upgrading minor versions regularly. When upgrading to a new minor version, review the corresponding Release notes , where behavioral changes are highlighted. <patch> The patch version number is incremented when only backward-compatible bug fixes are introduced. Upgrading to a new patch release is the simplest upgrade path. OSGi package versioning policy: Since release 2.21.0 , Log4j follows OSGi best practices by versioning each Java package individually (see Versioning Packages for details). Package versions are available in the manifest of each artifact and in the package Javadoc. For example, the version of the org.apache.logging.log4j.core.appender package appears in the package summary page . Package versions have the form X.Y.Z , where the X.Y portion corresponds to the Log4j version that last introduced changes to the package’s public API. For example, if a package has version 2.34.5 , then all functionality in that package has been available since Log4j 2.34.0 . Version alignment Because Log4j API and Log4j Core implementation are separate artifacts, their versions at runtime must be aligned: Log4j Core version X depends on Log4j API version X , so you must have at least version X of Log4j API at runtime.
Conversely, to use all methods provided by Log4j API version X , you need a Log4j Core version that implements them, i.e., Log4j Core version X or later. The easiest way to ensure version alignment in your project is to use the log4j-bom artifact in your build tool. Using log4j-bom guarantees that compatible versions are selected, regardless of your tool’s dependency resolution strategy. Version lifecycle and maintenance policy Minor releases of Log4j follow a defined lifecycle consisting of four phases: Active development (AD) The version is under active development and may introduce new features. Pre-release builds (alpha, beta, etc.) may be published during this phase; vulnerability reports are accepted and will be addressed. Versions in this phase are not recommended for production use. Active maintenance (AM) The version is considered stable and suitable for production. In this phase, no new features are accepted: only bug fixes and security fixes. Vulnerability reports are accepted and will be addressed. Due to the limited resources of the Log4j project, only the latest minor release of the latest major version remains in Active Maintenance. End-of-maintenance (EOM) The version is no longer actively maintained. New releases, including security fixes, are very unlikely . Vulnerability reports may still be submitted, but fixes will be produced only in exceptional circumstances. Because the project is volunteer-driven, any PMC member may choose to create a release for an EOM version, but such releases should be considered exceptional. End-of-life (EOL) The version is no longer maintained, and vulnerability reports are not accepted . This final phase is entered after an official PMC vote and public announcement. We avoid using the term support to describe the maintenance phases, because support remains available in all phases: See Community support for the community-run discussion channels that are offered on a best-effort basis. 
Although the ASF does not endorse any third-party commercial providers, some companies may offer paid support for EOM or EOL versions. See Commercial support for a publicly maintained list of such providers. | 2026-01-13T09:30:34
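The log4j-bom alignment recommended above can be declared in a Maven build roughly as follows. The coordinates org.apache.logging.log4j:log4j-bom are the published BOM artifact; the version number is only an example, so substitute the current release:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-bom</artifactId>
      <version>2.24.3</version> <!-- example version; use the current release -->
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <!-- Versions are inherited from the BOM, so API and Core stay aligned -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```

Because the individual dependencies omit explicit versions, the BOM's dependencyManagement section is the single place the Log4j version is chosen, which is exactly the alignment guarantee the page describes.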
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-event-request-response.html | How Lambda@Edge works with requests and responses - Amazon CloudFront Documentation Amazon CloudFront Developer Guide When you associate a CloudFront distribution with a Lambda@Edge function, CloudFront intercepts requests and responses at CloudFront edge locations. You can execute Lambda functions when the following CloudFront events occur: When CloudFront receives a request from a viewer (viewer request) Before CloudFront forwards a request to the origin (origin request) When CloudFront receives a response from the origin (origin response) Before CloudFront returns the response to the viewer (viewer response) If you're using AWS WAF, the Lambda@Edge viewer request is executed after any AWS WAF rules are applied. For more information, see Work with requests and responses and Lambda@Edge event structure . | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/log4j-spring-boot.html | Log4j Spring Boot Support :: Apache Log4j a subproject of Apache Logging Services Log4j Spring Boot Support This module provides enhanced support for Spring Boot beyond what Spring Boot itself provides. Overview The components in this module require a Spring Environment to have been created. Spring Boot applications initialize logging multiple times. The first initialization occurs before any initialization work is performed by Spring, thus no Environment will have been created and the components implemented in this module will not produce the desired results. Subsequent initializations of logging will have a Spring Environment.
Usage Spring Lookup The Spring Lookup allows configuration files to reference properties defined in Spring configuration files from a Log4j configuration file. For example:

    <property name="applicationName">${spring:spring.application.name}</property>

would set the Log4j applicationName property to the value of spring.application.name set in the Spring configuration. Spring Property Source Log4j uses property sources when resolving properties it uses internally. This support allows most of Log4j’s Configuration properties to be specified in the Spring Configuration. However, some properties that are only referenced during the first Log4j initialization, such as the property Log4j uses to allow the default Log4j implementation to be chosen, would not be available. Spring Profile Arbiter New in Log4j 2.15.0 are "Arbiters", conditionals that can cause a portion of the Log4j configuration to be included or excluded. log4j-spring-boot provides an Arbiter that allows a Spring profile value to be used for this purpose. Below is an example:

    <Configuration name="ConfigTest" status="ERROR" monitorInterval="5">
      <Appenders>
        <SpringProfile name="dev | staging">
          <Console name="Out">
            <PatternLayout pattern="%m%n"/>
          </Console>
        </SpringProfile>
        <SpringProfile name="prod">
          <List name="Out">
          </List>
        </SpringProfile>
      </Appenders>
      <Loggers>
        <Logger name="org.apache.test" level="trace" additivity="false">
          <AppenderRef ref="Out"/>
        </Logger>
        <Root level="error">
          <AppenderRef ref="Out"/>
        </Root>
      </Loggers>
    </Configuration>

Requirements The Log4j 2 Spring Cloud Configuration integration has a dependency on Log4j 2 API, Log4j 2 Core, and Spring Boot version 2.0.3.RELEASE or 2.1.1.RELEASE, or later versions in either release series.
| 2026-01-13T09:30:34
https://llvmweekly.org/issue/565 | LLVM Weekly - #565, October 28th 2024 Welcome to the five hundred and sixty-fifth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events There will be an LLVM dev room again at FOSDEM next year and the call for proposals is now out . The event will take place on February 1st in Brussels. As a reminder, the Munich LLVM meetup is taking place this week on October 30th . The next LLVM Social in Darmstadt will take place on November 27th . According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Johannes Doerfert, Renato Golin. Online sync-ups on the following topics: Flang, libc++, new contributors, LLVM/Offload, classic flang, MLIR open meeting, MLGO. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums A number of summaries from roundtable discussions / workshops at the LLVM Developers' Meeting are already posted. Read the notes from the embedded toolchains roundtable , llvm-libc roundtable , security group roundtable , embedded toolchains unconference , and LLVM loves machine learning workshop . Kazu Hirata suggested migrating llvm::StringRef to std::string_view . Orlando Cazalet-Hyams kicked off an RFC thread on improving is_stmt placement for improved interactive debugging . Adrian Prantl proposes adding a minimal bytecode to LLDB tailored for running LLDB formatters .
Alex Langford, Jonas Devlieghere, Ismail Bennani, Jim Ingham propose to refactor ‘Platform’ in LLDB . Another Flang liaison report to J3 was published . The report acts as a great summary of the current status and progress over the past ~4 months. The LLVM Code of Conduct Committee shared the 2024 transparency report . “Keksgesicht” asked about how to modify some downstream intrinsics to support 64-bit RISC-V as well as 32-bit and received guidance. John Harrison proposed updating lldb-dap’s server mode to allow multiple connections . This would amortize the overhead of loading symbols. David Spickett raised the question of documenting a minimum Python version for LLDB . Sandeep Dasgupta shared an RFC on supporting sub-channel quantization in MLIR . Donát Nagy shared some results from applying some proposed clang static analyzer loop handling improvements . LLVM commits MC layer support was added for newly added AArch64 atomic instructions and memory systems extensions, and register classes added for new Armv9.6 instructions. Also zeroing convert instructions, and more. 67ff5ba , 4679583 , 6e535a9 , 2c5208a . MC layer support was added for the new AArch64 compare-and-branch instructions. 82d2df2 . llvm-cxxfilt learned a new --quote option to quote demangled function names. d582442 . llvm-lit --use-unique-output-file-name will avoid overwriting test report files. This is motivated by CI use cases that often do something like ninja check-clang check-llvm . 8507dba , 22e21bc . The basic register allocator no longer takes into account the block frequency multiplier for spill weight calculations of optsize functions. This is because for optsize only the codesize cost should be considered, not the runtime cost of spilling. e6ada71 . Branch analysis was implemented for the Xtensa backend. 1e9a296 . The documentation covering “landing your change” on GitHub was cleaned up. dfc40650 . Support for the WebAssembly wide arithmetic proposal was implemented in LLVM. c2293b3 .
Clang commits The __mfp8 type was introduced for AArch64. 4994051 . Support was removed for negative priority in RISC-V target_version and target_clones attributes. c77e836 . The KeepFormFeed option was added to clang-format. 786db63 . Other project commits libcxx started using libc code for the first time (part of project ‘hand in hand’). It reuses libc code to implement std::from_chars . 6c4267f . BOLT gained a profile density computation. 6ee5ff9 . The libc++ headers as of the LLVM 19.1 release were copied to a directory to enable them to serve as ‘frozen’ C++03 headers. e78f53d . std::flat_map was implemented. 0be1883 . The runtimes can now be built against an installed LLVM tree. b1be213 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://logging.apache.org/log4j/2.x/log4j-jul.html | JUL-to-Log4j bridge :: Apache Log4j a subproject of Apache Logging Services JUL-to-Log4j bridge The JUL-to-Log4j bridge provides components that allow applications and libraries that use java.util.logging.Logger (JUL) to log to the Log4j API instead. This chapter covers advanced usage scenarios of the JUL-to-Log4j bridge. For the installation procedure and basic configuration see the Using JUL-to-Log4j section of our Installation guide . Configuration Struggling with the logging API, implementation, and bridge concepts? Here is a brief introduction. Logging API A logging API is an interface your code or your dependencies directly logs against. It is required at compile-time.
It is implementation agnostic to ensure that your application can write logs, but is not tied to a specific logging implementation. Log4j API, SLF4J , JUL (Java Logging) , JCL (Apache Commons Logging) , JPL (Java Platform Logging) and JBoss Logging are major logging APIs. Logging implementation A logging implementation is only required at runtime and can be changed without the need to recompile your software. Log4j Core, JUL (Java Logging) , and Logback are the most well-known logging implementations. Logging bridge Logging implementations accept input from a single logging API of their preference; Log4j Core from Log4j API, Logback from SLF4J, etc. A logging bridge is a simple logging implementation of a logging API that forwards all messages to a foreign logging API. Logging bridges allow a logging implementation to accept input from other logging APIs that are not its primary logging API. For instance, log4j-slf4j2-impl bridges SLF4J calls to the Log4j API and effectively enables Log4j Core to accept input from SLF4J. To make things a little bit more tangible, consider the following visualization of a typical Log4j Core installation with bridges for an application: Figure 1. Visualization of a typical Log4j Core installation with SLF4J, JUL, and JPL bridges The java.util.logging logging API, available since JRE 1.4, shares many similarities with other logging APIs, such as SLF4J or Log4j API. Similarly to other APIs, it allows users to change the underlying LogManager implementation, but unlike other APIs, it has two big limitations: it is part of the JRE, which means that each JVM can contain only one instance of the LogManager class and all the applications of an application server must use the same LogManager implementation; it does not support auto-detection of the logging backend through ServiceLoader or a similar mechanism (see JDK-8262741 ).
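The bridge concept above can be sketched with plain JDK classes: a JUL Handler that forwards every record it receives to another destination, here a simple list standing in for the "foreign" logging API. This is a hypothetical illustration of the mechanism, not the actual Log4jBridgeHandler code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class BridgeSketch {
    // Stand-in for the foreign logging API (e.g. the Log4j API).
    static final List<String> FOREIGN = new ArrayList<>();

    // A minimal "bridge": a JUL Handler that forwards every record elsewhere
    // instead of formatting and writing it itself.
    static class ForwardingHandler extends Handler {
        @Override public void publish(LogRecord r) {
            FOREIGN.add(r.getLevel() + ": " + r.getMessage());
        }
        @Override public void flush() {}
        @Override public void close() {}
    }

    public static void main(String[] args) {
        Logger jul = Logger.getLogger("demo");
        jul.setUseParentHandlers(false);        // detach JUL's default handlers
        jul.addHandler(new ForwardingHandler()); // install the bridge
        jul.setLevel(Level.INFO);
        jul.info("hello from JUL");
        System.out.println(FOREIGN); // [INFO: hello from JUL]
    }
}
```

The real JUL-to-Log4j bridge does the same routing, either via this Handler-style hook (Log4jBridgeHandler) or, more efficiently, by replacing the LogManager entirely.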
In order to switch to an alternate LogManager implementation you must be able to set the java.util.logging.manager system property before the first logging call. To work around the limitations of JUL, the JUL-to-Log4j bridge offers two installation options: If you are able to modify the java.util.logging.manager system property very early in the JVM startup process, you can replace the default LogManager implementation with a Log4j-specific one. This option gives the best performance. See Using LogManager for details. If JUL initializes before your application does, which is a typical behavior in application servers, you can still configure JUL to use Log4j as an appender. See Using Log4jBridgeHandler for details. Using LogManager The best way to install the JUL-to-Log4j bridge on your system is to set the value of the java.util.logging.manager Java system property to org.apache.logging.log4j.jul.LogManager . This property must be set very early in the application initialization process, e.g. using the -D<property>=<value> command line option of the java executable, or by adding:

    static {
        if (System.getProperty("java.util.logging.manager") == null) {
            System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
        }
    }

at the top of your main class. Setting this property will replace the default JUL LogManager implementation with a custom implementation that translates JUL Logger method calls into Log4j Logger calls with minimal overhead. LogManager -specific features The use of a java.util.logging.Filter is supported on a per- Logger basis. However, it is recommended to use the standard Filters feature in Log4j instead. The use of java.util.logging.Handler classes is not supported. Custom handlers should be replaced with the appropriate Log4j Appender . Using Log4jBridgeHandler Are you a Spring Boot user? Spring Boot will automatically configure Log4jBridgeHandler .
If setting the java.util.logging.manager system property is not possible, the JUL-to-Log4j bridge offers an implementation of JUL’s Handler abstract class, which redirects all log events to Log4j Core: org.apache.logging.log4j.jul.Log4jBridgeHandler . The Log4jBridgeHandler requires Log4j Core as the logging implementation and will fail with other Log4j API implementations. In order to use Log4jBridgeHandler you can either: modify the default JUL configuration file logging.properties to only contain:

    # Set Log4jBridgeHandler as only handler for all JUL loggers
    handlers = org.apache.logging.log4j.jul.Log4jBridgeHandler

(see the JRE documentation for details about the format and location of the logging.properties file), or call the Log4jBridgeHandler.install() method in your code. Usage of Log4jBridgeHandler introduces a considerably higher overhead than the usage of LogManager , since logging events need to traverse the entire JUL logging pipeline followed by the logging pipeline of the Log4j API implementation. Consider setting propagateLevels to true to reduce the overhead. You can tune the behavior of Log4jBridgeHandler by adding the following properties to the logging.properties configuration file; they are also available as parameters to the install() method call:

sysoutDebug
    Property name: org.apache.logging.log4j.jul.Log4jBridgeHandler.sysoutDebug
    install() parameter: N/A
    Type: boolean
    Default value: false
    If set to true the bridge will print diagnostic information on the standard output.

appendSuffix
    Property name: org.apache.logging.log4j.jul.Log4jBridgeHandler.appendSuffix
    install() parameter: suffixToAppend
    Type: String
    Default value: null
    Specifies the suffix to append to the name of all JUL loggers, which makes it possible to differentiate JUL log messages from native Log4j API messages.
propagateLevels
    Property name: org.apache.logging.log4j.jul.Log4jBridgeHandler.propagateLevels
    install() parameter: propagateLevels
    Type: boolean
    Default value: false
    The additional overhead of Log4jBridgeHandler can be especially heavy for disabled log statements. This is why you must ensure that log event filtering of the Log4j implementation and JUL are aligned. You can do it by either: configuring JUL loggers with the same levels as the Log4j loggers, or setting this property to true , which will perform the synchronization automatically.

Common configuration Independently of the way you install the JUL-to-Log4j bridge, you can finely tune the behavior of the bridge using the following configuration properties. See Configuration properties for more details.

log4j2.julLevelConverter
    Env. variable: LOG4J_JUL_LEVEL_CONVERTER
    Type: Class<? extends LevelConverter>
    Default value: org.apache.logging.log4j.jul.DefaultLevelConverter
    Fully qualified name of an alternative org.apache.logging.log4j.jul.LevelConverter implementation.

Default level conversions:

    Java Level | Log4j Level
    OFF        | OFF
    SEVERE     | ERROR
    WARNING    | WARN
    INFO       | INFO
    CONFIG     | custom CONFIG level with a numeric value of 450
    FINE       | DEBUG
    FINER      | TRACE
    FINEST     | custom FINEST level with a numeric value of 700
    ALL        | ALL

log4j2.julLoggerAdapter
    Env. variable: LOG4J_JUL_LOGGER_ADAPTER
    Type: Class<? extends AbstractLoggerAdapter>
    Default value: org.apache.logging.log4j.jul.ApiLoggerAdapter
    Fully qualified class name of the org.apache.logging.log4j.jul.AbstractLoggerAdapter implementation to use. This property allows users to choose between two implementations of the logging bridge: org.apache.logging.log4j.jul.CoreLoggerAdapter allows users to modify the Log4j Core configuration through the JUL Logger interface and requires the usage of the Log4j Core implementation; org.apache.logging.log4j.jul.ApiLoggerAdapter disables the level mutators in the JUL Logger interface. Since version 2.24.0 the default value changed to ApiLoggerAdapter .
If you need to modify log levels via JUL, you need to select CoreLoggerAdapter explicitly. | 2026-01-13T09:30:34
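The default level conversion table above can be expressed as a plain-Java lookup for illustration. This is only a sketch of the documented mapping, not the DefaultLevelConverter implementation; in the real converter, CONFIG and FINEST map to custom Log4j levels with numeric values 450 and 700:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.logging.Level;

// A sketch of the default JUL -> Log4j level mapping, expressed as Log4j
// level names. "CONFIG" and "FINEST" denote the custom Log4j levels the
// real DefaultLevelConverter registers (numeric values 450 and 700).
public class LevelTable {
    static final Map<Level, String> JUL_TO_LOG4J = new LinkedHashMap<>();
    static {
        JUL_TO_LOG4J.put(Level.OFF,     "OFF");
        JUL_TO_LOG4J.put(Level.SEVERE,  "ERROR");
        JUL_TO_LOG4J.put(Level.WARNING, "WARN");
        JUL_TO_LOG4J.put(Level.INFO,    "INFO");
        JUL_TO_LOG4J.put(Level.CONFIG,  "CONFIG"); // custom level, value 450
        JUL_TO_LOG4J.put(Level.FINE,    "DEBUG");
        JUL_TO_LOG4J.put(Level.FINER,   "TRACE");
        JUL_TO_LOG4J.put(Level.FINEST,  "FINEST"); // custom level, value 700
        JUL_TO_LOG4J.put(Level.ALL,     "ALL");
    }

    public static void main(String[] args) {
        System.out.println(JUL_TO_LOG4J.get(Level.SEVERE)); // ERROR
        System.out.println(JUL_TO_LOG4J.get(Level.FINER));  // TRACE
    }
}
```

Note the direction of the custom levels: in Log4j, higher numeric values are more verbose, so CONFIG (450) sits between INFO and DEBUG, and FINEST (700) sits beyond TRACE.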
https://logging.apache.org/log4j/2.x/soa.html | Integrating with service-oriented architectures :: Apache Log4j a subproject of Apache Logging Services Integrating with service-oriented architectures On this page we share best practices you can employ to integrate applications using Log4j Core with service-oriented architectures, along with guides for some popular scenarios. Motivation Most modern software is deployed in service-oriented architectures . This is a very broad domain and can be realized in an amazingly large number of ways.
Nevertheless, they all redefine the notion of an application:

- Deployed in multiple instances
- Situated in multiple locations; either in the same rack, or in data centers on different continents
- Hosted by multiple platforms; hardware, virtual machine, container, etc.
- Polyglot; a product of multiple programming languages
- Scaled on demand; instances come and go over time

Naturally, logging systems also evolved to accommodate these needs. In particular, the old practice of "monoliths writing logs to files rotated daily" has changed in two major ways:

Applications deliver logs differently. Applications no longer write logs to files, but encode them structurally and deliver them to a centrally managed external system. Most of the time this is a proxy (a library, a sidecar container, etc.) that takes care of discovering the log storage system and determining the right external service to forward the logs to.

Platforms store logs differently. There is no longer a /var/log/tomcat/catalina.out combining all logs of a monolith. Instead, the software runs in multiple instances, each possibly implemented in a different language, and instances get scaled (i.e., new ones get started, old ones get stopped) on demand. To accommodate this, logs are persisted in a central storage system (Elasticsearch, Google Cloud Logging, etc.) that provides advanced navigation and filtering capabilities.

Log4j Core not only adapts to this evolution, but also strives to provide best-in-class support for it. We will explore how to integrate Log4j with service-oriented architectures.

Best practices

Independent of the service-oriented architecture you choose, there are certain best practices we strongly encourage you to follow:

Encode logs using a structured layout

We can't emphasize this enough: use nothing but a structured layout to deliver your logs to an external system.
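To make the "structured layout" recommendation concrete, here is a minimal, language-agnostic sketch of what a structured log event looks like: one JSON document per event, with the message carried alongside machine-readable context fields. This is purely illustrative and is not Log4j's JSON Template Layout; the field names ("timestamp", "level", "message") are assumptions for the example.

```python
import json
import time


def encode_log_event(level, message, **context):
    """Encode a log event as a single JSON document (a common structured format).

    Illustrative sketch only; field names are assumptions, not Log4j's schema.
    """
    event = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "level": level,
        "message": message,
        **context,  # arbitrary context fields become queryable in log storage
    }
    return json.dumps(event)


line = encode_log_event("INFO", "user logged in", service="checkout")
```

Because every field is addressable, a log storage service can filter on, say, `service` without regex-scraping free-form text.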
We recommend JSON Template Layout for this purpose:

- JSON Template Layout provides full customizability and contains several predefined layouts for popular log storage services.
- JSON is accepted by every log storage service.
- JSON is supported by logging frameworks in other languages. This makes it possible to agree on a common log format with non-Java applications.

Use a proxy for writing logs

Most of the time it is not a good idea to write to the log storage system directly; instead, delegate that task to a proxy. This design decouples the applications' log target from the log storage system and, as a result, effectively enables each to evolve independently and reliably (i.e., without downtime). For instance, it allows the log storage system to scale or migrate to a new environment while proxies take care of the necessary buffering and routing. This proxy can appear in many forms, for instance:

- The console can act as a proxy. Logs written to the console can be consumed by an external service. For example, The Twelve-Factor App and the Kubernetes Logging Architecture recommend this approach.
- A library can act as a proxy. It can tap into the logging API and forward events to an external service. For instance, Datadog's Java Log Collector uses this mechanism.
- An external service can act as a proxy, which applications can write logs to. For example, you can write to Logstash, a Kubernetes logging agent sidecar, or a Redis queue over a socket.

What to use as a proxy depends on your deployment environment. Consult your colleagues to find out whether an established logging proxy convention already exists. Otherwise, we strongly encourage you to establish one in collaboration with your system administrators and architects.

Configure your appender correctly

Once you decide on the log proxy to use, the choice of appender pretty much becomes self-evident.
Nevertheless, there are some tips we recommend you follow:

- For writing to the console, use a Console Appender and make sure to configure its direct attribute to true for maximum efficiency.
- For writing to an external service, use a Socket Appender and make sure to configure the protocol and the layout's null termination (e.g., see the nullEventDelimiterEnabled configuration attribute of JSON Template Layout) appropriately.

Avoid writing to files

As explained in Motivation, in a service-oriented architecture, log files are

- Difficult to maintain – writable volumes must be mounted to the runtime (container, VM, etc.), rotated, and monitored for excessive usage
- Difficult to use – multiple files need to be manually combined while troubleshooting, and there is no central navigation point
- Difficult to interoperate – each application needs to be individually configured to produce the same structured log output to enable interleaving of logs from multiple sources while troubleshooting distributed issues

In short, we don't recommend writing logs to files.

Separate logging configuration from the application

We strongly advise you to separate the logging configuration from the application and couple them in an environment-specific way. This will allow you to

- Address environment-specific configuration needs (e.g., the logging verbosity needs of test and production can differ)
- Ensure Log4j configuration changes apply to all affected Log4j-using software without the need to manually update each configuration one by one

How to implement this separation pretty much depends on your setup. We will share some recommended approaches to give you an idea:

Choosing configuration files during deployment

Environment-specific Log4j configuration files (log4j2-common.xml, log4j2-local.xml, log4j2-test.xml, log4j2-prod.xml, etc.)
can be provided in one of the following ways:

- Shipped with your software (i.e., accessible on the classpath)
- Served from an HTTP server
- A combination of the first two

Depending on the deployment environment, you can selectively activate a subset of them using the log4j2.configurationFile configuration property. Spring Boot also allows you to configure the underlying logging system. Just like any other Spring Boot configuration, logging-related configuration can be provided in multiple files split by profiles matching the environment: application-common.yaml, application-local.yaml, etc. Spring Boot's Externalized Configuration system will automatically load these files depending on the active profile(s).

Mounting configuration files during deployment

Many service-oriented deployment architectures offer solutions for environment-specific configuration storage: Kubernetes' ConfigMap, HashiCorp's Consul, etc. You can leverage these to store environment-specific Log4j configurations and mount them to the associated runtime (container, VM, etc.) at deployment.

Log4j Core can poll configuration files for changes (see the monitorInterval attribute) and reconfigure the associated logger context. You can leverage this mechanism to dynamically update the Log4j configuration at runtime. Be careful not to shoot yourself in the foot with this mechanism: imagine publishing an incorrect log4j2.xml and rendering the logging setup of your entire cluster useless in seconds. Coupling the configuration with the application at deployment and gradually rolling out new configurations is a more reliable approach.

Guides

In this section, we share guides on some popular integration scenarios.

Docker

See Log4j Docker for Docker-specific Log4j features, e.g., Docker Lookup. We also strongly advise you to check the extensive logging integration offered by Docker containers.
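The environment-specific configuration files described above can be selected at deployment with a small amount of glue. The sketch below builds a value for the log4j2.configurationFile property from a deployment-environment variable; the APP_ENV variable name and the assumption that a comma-separated file list yields a merged composite configuration are this example's, so verify them against your Log4j version before relying on the pattern.

```python
import os

# Hypothetical mapping from deployment environment to an environment-specific
# Log4j configuration file; the file names mirror the examples above.
CONFIGS = {
    "local": "log4j2-local.xml",
    "test": "log4j2-test.xml",
    "prod": "log4j2-prod.xml",
}


def select_log4j_config(env=None):
    """Return a value for the log4j2.configurationFile property.

    The common file comes first; the environment-specific file is appended so
    its settings take precedence in a composite configuration (an assumption
    of this sketch - check your Log4j version's composite-config behavior).
    """
    env = env or os.environ.get("APP_ENV", "local")
    return ",".join(["log4j2-common.xml", CONFIGS[env]])
```

A deployment script would export the result, e.g. as `-Dlog4j2.configurationFile=$(…)`, keeping the application binary identical across environments.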
Kubernetes

Log4j Kubernetes (containing Kubernetes Lookup) is distributed as a part of Fabric8's Kubernetes Client; refer to its website for details.

Elasticsearch & Logstash

Elasticsearch, Logstash, and Kibana (aka the ELK Stack) is probably the most popular logging system solution. In this setup,

- Elasticsearch is used for log storage
- Logstash is used for transformation and ingestion into Elasticsearch from multiple sources (file, socket, etc.)
- Kibana is used as a web-based UI to query Elasticsearch

To begin with, JSON is the de facto messaging format used across the entire Elastic platform. Hence, as stated earlier, we strongly advise you to configure a structured encoding, i.e., JSON Template Layout.

Logstash as a proxy

While using the ELK stack, there are numerous ways you can write your application logs to Elasticsearch. We advise you to always employ a proxy while doing so. In particular, we recommend Logstash for this purpose. In a modern software stack, the shape and accessibility of logs vary greatly: some systems write to files (be they legacy or new), some don't provide a structured encoding, etc. Logstash excels at ingesting from a wide range of sources, transforming input into the desired format, and writing it to Elasticsearch.

While setting up Logstash, we recommend using the TCP input plugin in combination with the Elasticsearch output plugin to accept logs over a TCP socket and write them to Elasticsearch. An example logstash.conf snippet for accepting JSON-encoded log events over TCP and writing them to Elasticsearch:

    input {
      tcp {                                           (1)
        port => 12345                                 (2)
        codec => "json"                               (3)
      }
    }
    output {
      # stdout { codec => rubydebug }                 (4)
      # Modify the hosts value to reflect where Elasticsearch is installed.
      elasticsearch {                                 (5)
        hosts => ["http://localhost:9200/"]           (6)
        index => "app-%{application}-%{+YYYYMMdd}"    (7)
      }
    }

1. Using the TCP input plugin to accept logs
2. Setting the port Logstash will bind to for TCP connections to 12345 – adapt the port to your setup
3. Setting the payload encoding to JSON
4. Uncomment this while troubleshooting your Logstash configuration
5. Using the Elasticsearch output plugin to write logs to Elasticsearch
6. The list of Elasticsearch hosts to connect to
7. The name of the Elasticsearch index to write to

Refer to the official documentation for details on configuring a Logstash pipeline.

For the sake of completeness, see the following Log4j configurations to write to the TCP socket Logstash accepts input from.

Snippet from an example log4j2.xml:

    <Socket name="SOCKET" host="localhost" port="12345">
      <JsonTemplateLayout/>
    </Socket>

Snippet from an example log4j2.json:

    "Socket": {
      "name": "SOCKET",
      "host": "localhost",
      "port": 12345,
      "JsonTemplateLayout": {}
    }

Snippet from an example log4j2.yaml:

    Socket:
      name: "SOCKET"
      host: "localhost"
      port: 12345
      JsonTemplateLayout: {}

Snippet from an example log4j2.properties:

    appender.0.type = Socket
    appender.0.name = SOCKET
    appender.0.host = localhost
    appender.0.port = 12345
    appender.0.layout.type = JsonTemplateLayout

We don't recommend writing logs to files. If this is a necessity in your logging setup for some reason, we recommend you check Filebeat. It is a data shipper agent for forwarding logs to Logstash, Elasticsearch, etc.
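For non-Java components in the same stack, the Logstash TCP input with a json codec is easy to target directly: emit one JSON document per line over the socket. The sketch below (host, port, and field names are placeholders matching the Logstash example above, not prescribed values) shows the framing; the trailing newline plays the same event-delimiting role that null termination does in null-delimited setups.

```python
import json
import socket


def encode_event(message, **fields):
    """Encode one log event as a JSON document followed by a newline.

    Logstash's json codec on a TCP input reads one JSON document per line,
    so "\n" acts as the event delimiter in this sketch.
    """
    return (json.dumps({"message": message, **fields}) + "\n").encode("utf-8")


def send_event(host, port, payload):
    """Ship a pre-encoded event to the Logstash TCP input (sketch only;
    a real client would keep the connection open and handle failures)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)


payload = encode_event("order created", application="shop", level="INFO")
# send_event("localhost", 12345, payload)  # requires a running Logstash
```

With the example pipeline above, the `application` field would then feed the `%{application}` reference in the Elasticsearch index name.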
http://docs.buildbot.net/current/manual/cmdline.html#buildbot

2.7. Command-line Tool — Buildbot 4.3.0 documentation

This section describes the command-line tools available after a Buildbot installation. The two main command-line tools are buildbot and buildbot-worker. The former manages a Buildbot master and the latter manages a Buildbot worker.

Every command-line tool has a list of global options and a set of commands, each with its own options. One can run these tools in the following way:

    buildbot [global options] command [command options]
    buildbot-worker [global options] command [command options]

The buildbot command is used on the master, while buildbot-worker is used on the worker. The global options are the same for both tools and perform the following actions:

--help
    Print general help about available commands and global options and exit. All subsequent arguments are ignored.
--verbose
    Set verbose output.
--version
    Print the current buildbot version and exit. All subsequent arguments are ignored.

You can get help on any command by specifying --help as a command option:

    buildbot command --help

You can also use the manual pages for buildbot and buildbot-worker for quick reference on command-line options. The remainder of this section describes each buildbot command. See the Command Line Index for a full list.
2.7.1. buildbot

The buildbot command-line tool can be used to start or stop a buildmaster and to interact with a running buildmaster. Some of its subcommands are intended for buildmaster admins, while some are for developers who are editing the code that the buildbot is monitoring.

2.7.1.1. Administrator Tools

The following buildbot sub-commands are intended for buildmaster administrators:

create-master

    buildbot create-master -r {BASEDIR}

This creates a new directory and populates it with files that allow it to be used as a buildmaster's base directory. You will usually want to use the -r option to create a relocatable buildbot.tac. This allows you to move the master directory without editing this file.

upgrade-master

    buildbot upgrade-master {BASEDIR}

This upgrades a previously created buildmaster's base directory for a new version of the buildbot master source code. It will copy the web server's static files and potentially upgrade the database.

start

    buildbot start [--nodaemon] {BASEDIR}

This starts a buildmaster which was already created in the given base directory. The daemon is launched in the background, with events logged to a file named twistd.log. The --nodaemon option instructs Buildbot to skip daemonizing; the process will start in the foreground and only return to the command line when it is stopped. Additionally, the user can set the environment variable START_TIMEOUT to specify how long the script waits for the master to start before declaring the operation a failure.

restart

    buildbot restart [--nodaemon] {BASEDIR}

Restart the buildmaster. This is equivalent to stop followed by start. The --nodaemon option has the same meaning as for start.

stop

    buildbot stop {BASEDIR}

This terminates the daemon (either buildmaster or worker) running in the given directory. The --clean option shuts down the buildmaster cleanly.
With the --no-wait option, the buildbot stop command sends the buildmaster a shutdown signal and exits immediately, without waiting for the buildmaster to shut down completely.

sighup

    buildbot sighup {BASEDIR}

This sends a SIGHUP to the buildmaster running in the given directory, which causes it to re-read its master.cfg file.

checkconfig

    buildbot checkconfig {BASEDIR|CONFIG_FILE}

This checks that the buildmaster configuration is well-formed and contains no deprecated or invalid elements. If no arguments are used, or the base directory is passed as the argument, the config file specified in buildbot.tac is checked. If the argument is the path to a config file, then it will be checked without using the buildbot.tac file.

cleanupdb

    buildbot cleanupdb {BASEDIR|CONFIG_FILE} [-q]

This command is a front end for various database maintenance jobs:

- optimiselogs: this optimization groups logs into bigger chunks to apply a higher level of compression.

This script runs for as long as it takes to finish the job, including the time needed to check the master.cfg file.

copy-db

    buildbot copy-db {DESTINATION_URL} {BASEDIR} [-q]

This command copies all buildbot data from the source database configured in the buildbot configuration file to the destination database. The URL of the destination database is specified on the command line. The destination database may have a different type from the source database. The destination database must be empty; the script will initialize it in the same way as if a new Buildbot installation were created. The source database must already be upgraded to the current Buildbot version by the buildbot upgrade-master command.

2.7.1.2. Developer Tools

These tools are provided for use by the developers who are working on the code that the buildbot is monitoring.

try

This lets a developer ask the question: What would happen if I committed this patch right now?
It runs the unit test suite (across multiple build platforms) on the developer's current code, allowing them to make sure they will not break the tree when they finally commit their changes.

The buildbot try command is meant to be run from within a developer's local tree, and starts by figuring out the base revision of that tree (what revision was current the last time the tree was updated), and a patch that can be applied to that revision of the tree to make it match the developer's copy. This (revision, patch) pair is then sent to the buildmaster, which runs a build with that SourceStamp. If you want, the tool will emit status messages as the builds run, and will not terminate until the first failure has been detected (or the last success).

There is an alternate form which accepts a pre-made patch file (typically the output of a command like svn diff). This --diff form does not require a local tree to run from. See try --diff concerning the --diff command option.

For this command to work, several pieces must be in place: the Try_Jobdir or Try_Userpass scheduler, as well as some client-side configuration.

Locating the master

The try command needs to be told how to connect to the try scheduler, and must know which of the authentication approaches described above is in use by the buildmaster. You specify the approach by using --connect=ssh or --connect=pb (or try_connect = 'ssh' or try_connect = 'pb' in .buildbot/options).

For the PB approach, the command must be given a --master argument (in the form HOST:PORT) that points to the TCP port that you picked in the Try_Userpass scheduler. It also takes a --username and --passwd pair of arguments that match one of the entries in the buildmaster's userpass list. These arguments can also be provided as try_master, try_username, and try_password entries in the .buildbot/options file.

For the SSH approach, the command must be given --host and --username to get to the buildmaster host.
It must also be given --jobdir, which points to the inlet directory configured above. The jobdir can be relative to the user's home directory, but most of the time you will use an explicit path like ~buildbot/project/trydir. These arguments can be provided in .buildbot/options as try_host, try_username, try_password, and try_jobdir.

If you need to use something other than the default ssh command for connecting to the remote system, you can use the --ssh command-line option or try_ssh in the configuration file. The SSH approach also provides a --buildbotbin argument to allow specification of the buildbot binary to run on the buildmaster. This is useful when buildbot is installed in a virtualenv on the buildmaster host, or in other circumstances where the buildbot command is not on the path of the user given by --username. The --buildbotbin argument can be provided in .buildbot/options as try_buildbotbin.

The following command-line arguments are deprecated, but retained for backward compatibility:

- --tryhost is replaced by --host
- --trydir is replaced by --jobdir
- --master is replaced by --masterstatus

Likewise, the following .buildbot/options file entries are deprecated, but retained for backward compatibility:

- try_dir is replaced by try_jobdir
- masterstatus is replaced by try_masterstatus

Waiting for results

If you provide the --wait option (or try_wait = True in .buildbot/options), the buildbot try command will wait until your changes have either been proven good or bad before exiting. Unless you use the --quiet option (or try_quiet = True), it will emit a progress message every 60 seconds until the builds have completed. The SSH connection method does not support waiting for results.
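The .buildbot/options file referenced throughout is a plain Python file of simple assignments. Here is a sketch of one configured for the PB connection method; the host, port, and credentials are placeholders for this example, not real defaults.

```python
# Sketch of a .buildbot/options file (plain Python assignments).
# All values below are placeholders - substitute your own master and
# credentials as configured in the buildmaster's Try_Userpass scheduler.
try_connect = 'pb'
try_master = 'buildmaster.example.com:8031'
try_username = 'alice'
try_password = 'secret'
try_wait = True  # block until the try builds finish

# For the SSH method you would instead set, for example:
# try_connect = 'ssh'
# try_host = 'buildmaster.example.com'
# try_username = 'alice'
# try_jobdir = '~buildbot/project/trydir'
```

Keeping these settings in .buildbot/options means `buildbot try` can be run without repeating --master/--username/--passwd on every invocation.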
Choosing the Builders

A trial build is performed on multiple Builders at the same time, and the developer gets to choose which Builders are used (limited to a set selected by the buildmaster admin with the TryScheduler's builderNames= argument). The set you choose will depend upon what your goals are: if you are concerned about cross-platform compatibility, you should use multiple Builders, one from each platform of interest. You might use just one builder if that platform has libraries or other facilities that allow better test coverage than what you can accomplish on your own machine, or faster test runs.

The set of Builders to use can be specified with multiple --builder arguments on the command line. It can also be specified with a single try_builders option in .buildbot/options that uses a list of strings to specify all the Builder names:

    try_builders = ["full-OSX", "full-win32", "full-linux"]

If you are using the PB approach, you can get the names of the builders that are configured for the try scheduler using the --get-builder-names argument:

    buildbot try --get-builder-names --connect=pb --master=... --username=... --passwd=...

Specifying the VC system

The try command also needs to know how to take the developer's current tree and extract the (revision, patch) source-stamp pair. Each VC system uses a different process, so you start by telling the try command which VC system you are using, with an argument like --vc=cvs or --vc=git. This can also be provided as try_vc in .buildbot/options. The following names are recognized: bzr, cvs, darcs, hg, git, mtn, p4, svn.

Finding the top of the tree

Some VC systems (notably CVS and SVN) track each directory more-or-less independently, which means the try command needs to move up to the top of the project tree before it will be able to construct a proper full-tree patch. To accomplish this, the try command will crawl up through the parent directories until it finds a marker file.
The default name for this marker file is .buildbot-top, so when you are using CVS or SVN you should touch .buildbot-top from the top of your tree before running buildbot try. Alternatively, you can use a filename like ChangeLog or README, since many projects put one of these files in their top-most directory (and nowhere else). To set this filename, use --topfile=ChangeLog, or set it in the options file with try_topfile = 'ChangeLog'.

You can also manually set the top of the tree with --topdir=~/trees/mytree, or try_topdir = '~/trees/mytree'. If you use try_topdir in a .buildbot/options file, you will need a separate options file for each tree you use, so it may be more convenient to use the try_topfile approach instead.

Other VC systems which work on full projects instead of individual directories (Darcs, Mercurial, Git, Monotone) do not require try to know the top directory, so the --try-topfile and --try-topdir arguments will be ignored. If the try command cannot find the top directory, it will abort with an error message.

The following command-line arguments are deprecated, but retained for backward compatibility:

- --try-topdir is replaced by --topdir
- --try-topfile is replaced by --topfile

Determining the branch name

Some VC systems record the branch information in a way that try can locate it. For the others, if you are using something other than the default branch, you will have to tell the buildbot which branch your tree is using. You can do this with either the --branch argument, or a try_branch entry in the .buildbot/options file.

Determining the revision and patch

Each VC system has a separate approach for determining the tree's base revision and computing a patch.

CVS

try pretends that the tree is up to date. It converts the current time into a -D time specification, uses it as the base revision, and computes the diff between the upstream tree as of that point in time versus the current contents.
This works, more or less, but requires that the local clock be in reasonably good sync with the repository.

SVN

try does an svn status -u to find the latest repository revision number (emitted on the last line, in the Status against revision: NN message). It then performs an svn diff -r NN to find out how your tree differs from the repository version, and sends the resulting patch to the buildmaster. If your tree is not up to date, this will result in the try tree being created with the latest revision, then backwards patches applied to bring it back to the version you actually checked out (plus your actual code changes), but this will still result in the correct tree being used for the build.

bzr

try does a bzr revision-info to find the base revision, then a bzr diff -r$base.. to obtain the patch.

Mercurial

hg parents --template '{node}\n' emits the full revision id (as opposed to the common 12-char truncated form), which is a SHA1 hash of the current revision's contents. This is used as the base revision. hg diff then provides the patch relative to that revision. For try to work, your working directory must only have patches that are available from the same remotely-available repository that the build process' source.Mercurial will use.

Perforce

try does a p4 changes -m1 ... to determine the latest changelist and implicitly assumes that the local tree is synced to this revision. This is followed by a p4 diff -du to obtain the patch. A p4 patch differs slightly from a normal diff: it contains full depot paths and must be converted to paths relative to the branch top. To convert it, the following restriction is imposed: the p4base (see P4Source) is assumed to be //depot.

Darcs

try does a darcs changes --context to find the list of all patches back to and including the last tag that was made. This text file (plus the location of a repository that contains all these patches) is sufficient to re-create the tree.
Therefore the contents of this context file are the revision stamp for a Darcs-controlled source tree. It then does a darcs diff -u to compute the patch relative to that revision.

Git

git branch -v lists all the branches available in the local repository along with the revision ID each points to and a short summary of the last commit. The line containing the currently checked out branch begins with "* " (star and space) while all the others start with "  " (two spaces). try scans for this line and extracts the branch name and revision from it. Then it generates a diff against the base revision.

Todo: I'm not sure if this actually works the way it's intended, since the extracted base revision might not actually exist in the upstream repository. Perhaps we need to add a --remote option to specify the remote tracking branch to generate a diff against.

Monotone

mtn automate get_base_revision_id emits the full revision id, which is a SHA1 hash of the current revision's contents. This is used as the base revision. mtn diff then provides the patch relative to that revision. For try to work, your working directory must only have patches that are available from the same remotely-available repository that the build process' source.Monotone will use.

patch information

You can provide --who=dev to designate who is running the try build. This will add dev to the Reason field on the try build's status web page. You can also set try_who = dev in the .buildbot/options file. Note that --who=dev will not work on masters of version 0.8.3 or earlier.

Similarly, --comment=COMMENT will specify the comment for the patch, which is also displayed in the patch information. The corresponding config-file option is try_comment.
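The Git branch detection described above (scanning `git branch -v` output for the line starting with "* ") can be sketched as follows. This is an illustrative parser under the assumption of the default `git branch -v` line layout, not Buildbot's actual implementation, and it does not handle edge cases such as a detached HEAD.

```python
def current_branch_info(branch_v_output):
    """Extract (branch, revision) from `git branch -v` output.

    The checked-out branch's line begins with "* "; all others begin with
    two spaces. Sketch only - detached-HEAD lines are not handled.
    """
    for line in branch_v_output.splitlines():
        if line.startswith("* "):
            # After "* ": branch name, abbreviated revision, commit summary.
            branch, revision = line[2:].split()[:2]
            return branch, revision
    return None


# Hypothetical sample output for illustration:
sample = "  main    1a2b3c4 Fix build\n* topic   5d6e7f8 Try patch\n"
```

Here `current_branch_info(sample)` picks out the topic branch and its revision, which try would then diff against.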
Sending properties

You can set properties to send with your change using either the --property=key=value option, which sets a single property, or the --properties=key1=value1,key2=value2… option, which sets multiple comma-separated properties. Either of these can be specified multiple times. Note that the --properties option uses commas to split properties, so if your property value itself contains a comma, you'll need to use the --property option to set it.

try --diff

Sometimes you might have a patch from someone else that you want to submit to the buildbot. For example, a user may have created a patch to fix some specific bug and sent it to you by email. You've inspected the patch and suspect that it might do the job (and have at least confirmed that it doesn't do anything evil). Now you want to test it out.

One approach would be to check out a new local tree, apply the patch, run your local tests, then use buildbot try to run the tests on other platforms. An alternate approach is to use the buildbot try --diff form to have the buildbot test the patch without using a local tree. This form takes a --diff argument which points to a file that contains the patch you want to apply. By default this patch will be applied to the TRUNK revision, but if you give the optional --baserev argument, a tree of the given revision will be used as a starting point instead of TRUNK. You can also use buildbot try --diff=- to read the patch from stdin.

Each patch has a patchlevel associated with it. This indicates the number of slashes (and preceding pathnames) that should be stripped before applying the diff. This exactly corresponds to the -p or --strip argument to the patch utility. By default buildbot try --diff uses a patchlevel of 0, but you can override this with the -p argument.
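The patchlevel rule is easy to state precisely: strip that many leading path components from each file name in the diff, exactly as `patch -pN` does. A minimal sketch of the rule (not the real patch(1) implementation, which also normalizes things like leading slashes):

```python
def strip_patchlevel(path, p):
    """Drop `p` leading path components from a diff file name.

    Mirrors the `-p`/`--strip` behavior of the patch utility for the simple
    case; real patch(1) handles additional corner cases.
    """
    parts = path.split("/")
    return "/".join(parts[p:])
```

So with a patchlevel of 1, a diff header path like `a/src/main.c` is applied to `src/main.c`, while the default patchlevel of 0 leaves paths untouched.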
When you use --diff, you do not need to use any of the other options that relate to a local tree, specifically --vc, --try-topfile, or --try-topdir; these options will be ignored. Of course you must still specify how to get to the buildmaster (with --connect, --tryhost, etc.).

2.7.1.3. Other Tools

These tools are generally used by buildmaster administrators.

sendchange

This command is used to tell the buildmaster about source changes. It is intended to be used from within a commit script, installed on the VC server. It requires that you have a PBChangeSource running in the buildmaster (by being set in c['change_source']).

    buildbot sendchange --master {MASTERHOST}:{PORT} --auth {USER}:{PASS} --who {USER} {FILENAMES..}

The --auth option specifies the credentials to use to connect to the master, in the form user:pass. If the password is omitted, then sendchange will prompt for it. If both are omitted, the old default (username "change" and password "changepw") will be used. Note that this password is well-known, and should not be used on an internet-accessible port.

The --master and --username arguments can also be given in the options file (see .buildbot config directory). There are other (optional) arguments which can influence the Change that gets submitted:

--branch
    This provides the (string) branch specifier. If omitted, it defaults to None, indicating the default branch. All files included in this Change must be on the same branch.
--category
    This provides the (string) category specifier. If omitted, it defaults to None, indicating no category. The category property can be used by schedulers to filter what changes they listen to.
--project
    This provides the (string) project to which this change applies, and defaults to ''.
The project can be used by schedulers to decide which builders should respond to a particular change.

--repository (or repository) This provides the repository from which this change came, and defaults to ''.

--revision This provides a revision specifier, appropriate to the VC system in use.

--revision_file This provides a filename which will be opened and whose contents will be used as the revision specifier. This is specifically for Darcs, which uses the output of darcs changes --context as a revision specifier. This context file can be a couple of kilobytes long, spanning a couple of lines per patch, and would be a hassle to pass as a command-line argument.

--property This parameter is used to set a property on the Change generated by sendchange. Properties are specified as a name:value pair, separated by a colon. You may specify many properties by passing this parameter multiple times.

--comments This provides the change comments as a single argument. You may want to use --logfile instead.

--logfile This instructs the tool to read the change comments from the given file. If you use - as the filename, the tool will read the change comments from stdin.

--encoding Specifies the character encoding for all other parameters, defaulting to 'utf8'.

--vc Specifies which VC system the Change is coming from, one of: cvs, svn, darcs, hg, bzr, git, mtn, or p4. Defaults to None.

user

Note that in order to use this command, you need to configure a CommandlineUserManager instance in your master.cfg file, which is explained in Users Options. This command allows you to manage users in buildbot's database. No extra requirements are needed to use this command, aside from the buildmaster running. For details on how Buildbot manages users, see Users.

--master The user command can be run virtually anywhere, provided the location of the running buildmaster. The --master argument is of the form MASTERHOST:PORT.
--username PB connection authentication; it should match the arguments to CommandlineUserManager.

--passwd PB connection authentication; it should match the arguments to CommandlineUserManager.

--op There are four supported values for the --op argument: add, update, remove, and get. Each is described in full in the following sections.

--bb_username Used with --op=update, this sets the user's username for web authentication in the database. It requires --bb_password to be set along with it.

--bb_password Also used with --op=update, this sets the password portion of a user's web authentication credentials in the database. The password is encrypted prior to storage for security reasons.

--ids When working with users, you need to be able to refer to them by unique identifiers to find particular users in the database. The --ids option lets you specify a comma-separated list of these identifiers for use with the user command. The --ids option is used only with --op=remove or --op=get.

--info Users are known in buildbot as a collection of attributes tied together by some unique identifier (see Users). These attributes are specified in the form {TYPE}={VALUE} when using the --info option. These {TYPE}={VALUE} pairs are specified in a comma-separated list, so for example:

--info=svn=jdoe,git='John Doe <joe@example.com>'

The --info option can be specified multiple times in the user command; each specified option will be interpreted as a new user. Note that --info is only used with --op=add or --op=update, and whenever you use --op=update you need to specify the identifier of the user you want to update. This is done by prepending the --info arguments with {ID}:.
If we were to update 'jdoe' from the previous example, it would look like this:

--info=jdoe:git='Joe Doe <joe@example.com>'

Note that --master, --username, --passwd, and --op are always required to issue the user command. The --master, --username, and --passwd options can be specified in the options file with the keywords user_master, user_username, and user_passwd, respectively. If user_master is not specified, then master from the options file will be used instead.

Below are examples of how each command should look. Whenever a user command is successful, results will be shown to whoever issued the command.

For --op=add:

buildbot user --master={MASTERHOST} --op=add \
    --username={USER} --passwd={USERPW} \
    --info={TYPE}={VALUE},...

For --op=update:

buildbot user --master={MASTERHOST} --op=update \
    --username={USER} --passwd={USERPW} \
    --info={ID}:{TYPE}={VALUE},...

For --op=remove:

buildbot user --master={MASTERHOST} --op=remove \
    --username={USER} --passwd={USERPW} \
    --ids={ID1},{ID2},...

For --op=get:

buildbot user --master={MASTERHOST} --op=get \
    --username={USER} --passwd={USERPW} \
    --ids={ID1},{ID2},...

A note on --op=update: when updating --bb_username and --bb_password, --info doesn't need additional {TYPE}={VALUE} pairs and can take just the {ID} portion.

2.7.1.4. .buildbot config directory

Many of the buildbot tools must be told how to contact the buildmaster that they interact with. This specification can be provided as a command-line argument, but most of the time it will be easier to set it in an options file. The buildbot command will look for a special directory named .buildbot, starting from the current directory (where the command was run) and crawling upwards, eventually looking in the user's home directory.
It will look for a file named options in this directory, and will evaluate it as a Python script, looking for certain names to be set. You can just put simple name = 'value' pairs in this file to set the options. For a description of the names used in this file, please see the documentation for the individual buildbot sub-commands. The following is a brief sample of what this file's contents could be:

# for status-reading tools
masterstatus = 'buildbot.example.org:12345'
# for 'sendchange' or the debug port
master = 'buildbot.example.org:18990'

Note carefully that the names in the options file usually do not match the command-line option names.

master Equivalent to --master for sendchange. It is the location of the pb.PBChangeSource for sendchange.

username Equivalent to --username for the sendchange command.

branch Equivalent to --branch for the sendchange command.

category Equivalent to --category for the sendchange command.

try_connect Equivalent to --connect; this specifies how the try command should deliver its request to the buildmaster. The currently accepted values are ssh and pb.

try_builders Equivalent to --builders; specifies which builders should be used for the try build.

try_vc Equivalent to --vc for try; this specifies the version control system being used.

try_branch Equivalent to --branch; this indicates that the current tree is on a non-trunk branch.

try_topdir
try_topfile Use try_topdir, equivalent to --try-topdir, to explicitly indicate the top of your working tree, or try_topfile, equivalent to --try-topfile, to name a file that will only be found in that top-most directory.

try_host
try_username
try_dir When try_connect is ssh, the command will use try_host for --tryhost, try_username for --username, and try_dir for --trydir. Apologies for the confusing presence and absence of 'try'.
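Putting several of these names together, a .buildbot/options file for a pb-based try setup might look like the following sketch. The hostnames, port numbers, credentials, and builder names are illustrative placeholders, not defaults:

```python
# .buildbot/options -- evaluated by the buildbot command as a Python script.
# All values below are illustrative placeholders.

# Used by 'sendchange' and the debug port.
master = 'buildbot.example.org:18990'
username = 'alice'

# Used by 'buildbot try' when delivering requests over PB.
try_connect = 'pb'
try_master = 'buildbot.example.org:8031'
try_username = 'alice'
try_password = 'try-password'
try_builders = ['linux', 'macos']

# Ask try to wait for the requested build to complete.
try_wait = True
masterstatus = 'buildbot.example.org:12345'
```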
try_username
try_password
try_master Similarly, when try_connect is pb, the command will pay attention to try_username for --username, try_password for --passwd, and try_master for --master.

try_wait
masterstatus try_wait and masterstatus (equivalent to --wait and master, respectively) are used to ask the try command to wait for the requested build to complete.

2.7.2. buildbot-worker

The buildbot-worker command-line tool is used for worker management only and does not provide any additional functionality. With it, one can create, start, stop, and restart a worker.

2.7.2.1. create-worker

This creates a new directory and populates it with files that let it be used as a worker's base directory. You must provide several arguments, which are used to create the initial buildbot.tac file. The -r option is advisable here, just as for create-master.

buildbot-worker create-worker -r {BASEDIR} {MASTERHOST}:{PORT} {WORKERNAME} {PASSWORD}

The create-worker options are described in Worker Options.

2.7.2.2. start

This starts a worker which was already created in the given base directory. The daemon is launched in the background, with events logged to a file named twistd.log.

buildbot-worker start [--nodaemon] BASEDIR

The --nodaemon option instructs Buildbot to skip daemonizing. The process will start in the foreground and will only return to the command line when it is stopped.

2.7.2.3. restart

buildbot-worker restart [--nodaemon] BASEDIR

This restarts a worker which is already running. It is equivalent to a stop followed by a start. The --nodaemon option has the same meaning as for start.

2.7.2.4. stop

This terminates the daemon worker running in the given directory.

buildbot-worker stop BASEDIR
Setting up IAM permissions and roles for Lambda@Edge - Amazon CloudFront

To set up Lambda@Edge, you must have the following IAM permissions and roles for AWS Lambda:

IAM permissions: these permissions allow you to create the Lambda function and associate it with your CloudFront distribution.

Lambda function execution role (IAM role): the Lambda service principal assumes this role to run your function.

Service-linked roles for Lambda@Edge: service-linked roles allow specific AWS services to replicate Lambda functions to AWS Regions and let CloudWatch use CloudFront log files.

IAM permissions required to associate Lambda@Edge functions with CloudFront distributions

In addition to the IAM permissions that Lambda requires, the IAM user needs the following IAM permissions to associate Lambda functions with CloudFront distributions:

lambda:GetFunction: grants permission to get configuration information for the Lambda function, and a presigned URL to download the .zip file that contains the function.

lambda:EnableReplication*: grants permission to the resource policy so that the Lambda replication service can get the function code and configuration.

lambda:DisableReplication*: grants permission to the resource policy so that the Lambda replication service can delete the function.

Important: you must add the asterisk (*) at the end of the lambda:EnableReplication* and lambda:DisableReplication* actions.

For the resource, specify the ARN of the function version that you want to execute when a CloudFront event occurs, as in the following example:

arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2

iam:CreateServiceLinkedRole: grants permission to create the service-linked role used by Lambda@Edge to replicate Lambda functions in CloudFront. The service-linked role is created automatically for you after the first time you set up Lambda@Edge; you don't need to add this permission to other distributions that use Lambda@Edge.

cloudfront:UpdateDistribution or cloudfront:CreateDistribution: grants permission to update or create a distribution.

For more information, see the following topics: Identity and Access Management for Amazon CloudFront, and Lambda resource access permissions in the AWS Lambda Developer Guide.

Function execution role for service principals

You must create an IAM role that the lambda.amazonaws.com and edgelambda.amazonaws.com service principals can assume when executing your function.

Tip: when you create a function in the Lambda console, you can choose to create a new execution role from an AWS policy template. This step automatically adds the Lambda@Edge permissions required to execute the function. See step 5 in Tutorial: creating a simple Lambda@Edge function.

For more information about creating an IAM role manually, see Creating a role and attaching a policy (console) in the IAM User Guide.

Example: role trust policy

You add this policy under the Trust relationships tab of the role in the IAM console. Do not add it under the Permissions tab.

JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "edgelambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

For more information about granting the permissions needed for the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide.

Note: by default, whenever a CloudFront event triggers a Lambda function, data is written to CloudWatch Logs. To use these logs, the execution role needs permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole to grant these permissions to the execution role. For more information about CloudWatch Logs, see Edge function logs.

If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that operation.

Service-linked roles for Lambda@Edge

Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf.

Lambda@Edge uses the following IAM service-linked roles:

AWSServiceRoleForLambdaReplicator – Lambda@Edge uses this role to allow Lambda@Edge to replicate functions to AWS Regions. When you first add a Lambda@Edge trigger in CloudFront, a role named AWSServiceRoleForLambdaReplicator is automatically created to allow Lambda@Edge to replicate functions to AWS Regions. This role is required to use Lambda@Edge functions. The ARN of the AWSServiceRoleForLambdaReplicator role looks like the following example:

arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator

AWSServiceRoleForCloudFrontLogger – CloudFront uses this role to push log files to CloudWatch. You can use the log files to debug Lambda@Edge validation errors. When you add a Lambda@Edge function association so that CloudFront pushes Lambda@Edge error log files to CloudWatch, a role named AWSServiceRoleForCloudFrontLogger is automatically created. The ARN of the AWSServiceRoleForCloudFrontLogger role looks like the following:

arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger

Service-linked roles make setting up and using Lambda@Edge easier because you don't have to manually add the necessary permissions. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume them. The defined permissions include the trust policy and the permissions policy, and the permissions policy cannot be attached to any other IAM entity.

You must remove any associated CloudFront or Lambda@Edge resources before you can delete a service-linked role. This protects your Lambda@Edge resources by preventing you from removing a service-linked role that is still required to access active resources.

For more information about service-linked roles, see Service-linked roles for CloudFront.

Service-linked role permissions for Lambda@Edge

Lambda@Edge uses two service-linked roles, named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each role.

Contents: Service-linked role permissions for Lambda Replicator; Service-linked role permissions for CloudFront Logger

Service-linked role permissions for Lambda Replicator

This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role.

The role permissions policy allows Lambda@Edge to complete the following actions on the specified resources:

lambda:CreateFunction on arn:aws:lambda:*:*:function:*
lambda:DeleteFunction on arn:aws:lambda:*:*:function:*
lambda:DisableReplication on arn:aws:lambda:*:*:function:*
iam:PassRole on all AWS resources
cloudfront:ListDistributionsByLambdaFunction on all AWS resources

Service-linked role permissions for CloudFront Logger

This service-linked role allows CloudFront to push log files to CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role.

The role permissions policy allows Lambda@Edge to complete the following actions on the arn:aws:logs:*:*:log-group:/aws/cloudfront/* resource:

logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents

You must configure permissions to allow an IAM entity (such as a user, group, or role) to delete a Lambda@Edge service-linked role. For more information, see Service-linked role permissions in the IAM User Guide.

Creating service-linked roles for Lambda@Edge

Typically, you don't need to manually create the service-linked roles for Lambda@Edge. The service creates the roles for you automatically in the following scenarios:

When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role if it doesn't already exist. This role allows Lambda to replicate Lambda@Edge functions to AWS Regions. If you delete the service-linked role, it is re-created when you add a Lambda@Edge trigger in a distribution.

When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role if it doesn't already exist. This role allows CloudFront to push log files to CloudWatch. If you delete the service-linked role, it is re-created when you update or create a CloudFront distribution that has a Lambda@Edge association.

To create these service-linked roles manually, you can run the following AWS Command Line Interface (AWS CLI) commands:

To create the AWSServiceRoleForLambdaReplicator role, run the following command:

aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com

To create the AWSServiceRoleForCloudFrontLogger role, run the following command:

aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com

Editing Lambda@Edge service-linked roles

Lambda@Edge does not allow you to edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After the service creates a service-linked role, you cannot change the name of the role, because various entities might reference it. However, you can use IAM to edit the role description. For more information, see Editing a service-linked role in the IAM User Guide.

AWS Regions that support service-linked roles for Lambda@Edge

CloudFront supports using service-linked roles for Lambda@Edge in the following AWS Regions:

US East (N. Virginia) – us-east-1
US East (Ohio) – us-east-2
US West (N. California) – us-west-1
US West (Oregon) – us-west-2
Asia Pacific (Mumbai) – ap-south-1
Asia Pacific (Seoul) – ap-northeast-2
Asia Pacific (Singapore) – ap-southeast-1
Asia Pacific (Sydney) – ap-southeast-2
Asia Pacific (Tokyo) – ap-northeast-1
Europe (Frankfurt) – eu-central-1
Europe (Ireland) – eu-west-1
Europe (London) – eu-west-2
South America (São Paulo) – sa-east-1
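As a hedged illustration, the user-side permissions described at the top of this page could be collected into a single IAM policy document along the following lines. The account ID, Region, and function name are placeholders, and this sketch is not from the AWS documentation itself; verify action names against the current IAM action reference before use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:EnableReplication*",
        "lambda:DisableReplication*"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "cloudfront:UpdateDistribution",
        "cloudfront:CreateDistribution"
      ],
      "Resource": "*"
    }
  ]
}
```

Note that the first statement scopes the Lambda actions to a specific function version ARN, as the page recommends, while the service-linked-role and distribution actions use a broader resource.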
File appenders :: Apache Log4j

File appenders

Log4j Core provides multiple appenders that store log messages in a file. These appenders differ in the way they access the file system and might provide different performance characteristics.

File appenders do not offer a mechanism for external applications to force them to reopen the log file. External log archiving tools such as logrotate will therefore need to copy the current log file and then truncate it. Log events emitted during this operation will be lost. If you want to rotate your log files, use a rolling file appender instead.
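The copy-then-truncate approach described above corresponds to logrotate's copytruncate directive. A sketch of such a logrotate configuration follows; the log path and rotation schedule are illustrative, not something the Log4j documentation prescribes:

```
/var/log/myapp/app.log {
    daily
    rotate 7
    # Copy the file and truncate it in place, so the appender
    # keeps writing to the same open file descriptor.
    copytruncate
    compress
}
```

Keep in mind the caveat above: events logged in the window between the copy and the truncate are lost, which is why a rolling file appender is the recommended alternative.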
Appenders

Log4j Core provides three file appender implementations:

File: The File Appender uses FileOutputStream to access log files.

RandomAccessFile: The RandomAccessFile Appender uses RandomAccessFile to access log files.

MemoryMappedFile: The MemoryMappedFile Appender maps log files into a MappedByteBuffer. Instead of making system calls to write to disk, this appender can simply change the program's local memory, which is orders of magnitude faster.

Two appenders, even from different logger contexts, share a common FileManager if they use the same value for the fileName attribute. Sharing a FileManager guarantees that multiple appenders will access the log file sequentially, but requires most of the remaining configuration parameters to be the same.

Common configuration

Table 1. Common configuration attributes

Required:
fileName (Path): The path to the current log file. If the folder containing the file does not exist, it will be created.
name (String): The name of the appender.

Optional:
bufferSize (int, default 8192): The size of the ByteBuffer used internally by the appender. See Buffering for more details.
ignoreExceptions (boolean, default true): If false, logging exceptions will be forwarded to the caller of the logging statement; otherwise, they will be ignored. Logging exceptions are always also logged to the Status Logger.
immediateFlush (boolean, default true): If set to true, the appender will flush its internal buffer after each event. See Buffering for more details.

Table 2. Common nested elements

Filter (zero or one): Allows filtering log events just before they are formatted and sent. See also the appender filtering stage.
Layout (zero or one): Formats log events. See Layouts for more information.

File configuration

The File Appender provides the following configuration options, beyond the common ones: Table 3.
File configuration attributes

append (boolean, default true): If true, the log file will be opened in APPEND mode. On most systems this guarantees atomic writes to the end of the file, even if the file is opened by multiple applications.
bufferedIo (boolean, default true): If set to true, Log4j Core will format each log event in an internal buffer before sending it to the underlying resource. See Buffering for more details.
createOnDemand (boolean, default false): If true, the appender creates the file on demand: only when a log event passes all filters and is routed to this appender.
filePermissions (PosixFilePermissions, default null): If not null, specifies the POSIX file permissions to apply to each created file. The permissions must be provided in the format used by PosixFilePermissions.fromString(), e.g. rw-rw----. The underlying file system must support the POSIX file attribute view.
fileOwner (String, default null): If not null, specifies the file owner to apply to each created file. The underlying file system must support the file owner attribute view.
fileGroup (String, default null): If not null, specifies the file group owner to apply to each created file. The underlying file system must support the POSIX file attribute view.
locking (boolean, default false): If true, Log4j will lock the log file at each log event. Note that the effects of this setting depend on the operating system: some systems, like most POSIX OSes, do not offer mandatory locking, only advisory file locking. This setting can also reduce the performance of the appender.

📖 Plugin reference for File

RandomAccessFile configuration

The RandomAccessFile Appender provides the following configuration options, beyond the common ones:

Table 4. RandomAccessFile configuration attributes

append (boolean, default true): If true, the appender starts writing at the end of the file.
This setting does not give the same atomicity guarantees as for the File Appender. The log file cannot be opened by multiple applications at the same time. Unlike the File appender, this appender always uses an internal buffer of size bufferSize.

📖 Plugin reference for RandomAccessFile

MemoryMappedFile configuration

The MemoryMappedFile Appender provides the following configuration options, beyond the common ones:

Table 5. MemoryMappedFile configuration attributes

append (boolean, default true): If true, the appender starts writing at the end of the file. This setting does not give the same atomicity guarantees as for the File Appender. The log file cannot be opened by multiple applications at the same time.
regionLength (int, default 32 × 1024 × 1024): Specifies the size, in bytes, of the memory-mapped log file buffer. Unlike other file appenders, this appender always uses a memory-mapped buffer of size regionLength as its internal buffer.

📖 Plugin reference for MemoryMappedFile
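Tying the File appender options above together, a minimal log4j2.xml configuration might look like the following sketch. The file path, appender name, and layout pattern are illustrative choices, not defaults:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- File appender writing to logs/app.log;
         append and bufferedIo keep their default value of true. -->
    <File name="MainFile" fileName="logs/app.log">
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
    </File>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="MainFile"/>
    </Root>
  </Loggers>
</Configuration>
```

Switching to the RandomAccessFile appender is then a matter of replacing the File element with a RandomAccessFile element of the same shape, since both share the common configuration attributes described above.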
Log4j API :: Apache Log4j

Log4j API

Log4j is essentially composed of a logging API called Log4j API, and its reference implementation called Log4j Core.

What is a logging API and a logging implementation?

Logging API: A logging API is an interface your code, or your dependencies, directly logs against. It is required at compile time. It is implementation-agnostic, ensuring that your application can write logs without being tied to a specific logging implementation. Log4j API, SLF4J, JUL (Java Logging), JCL (Apache Commons Logging), JPL (Java Platform Logging), and JBoss Logging are major logging APIs.
Logging implementation: A logging implementation is only required at runtime and can be changed without recompiling your software. Log4j Core, JUL (Java Logging), and Logback are the most well-known logging implementations.

Are you looking for a crash course on how to use Log4j in your application or library? See Getting started. You can also check out Installation for the complete installation instructions.

Log4j API provides:

A logging API that libraries and applications can code to
A minimal logging implementation (aka. Simple Logger)
Adapter components to create a logging implementation

This page tries to cover the most prominent Log4j API features.

Did you know that Log4j provides specialized APIs for Kotlin and Scala? Check out the Log4j Kotlin and Log4j Scala projects for details.

Introduction

To log, you need a Logger instance, which you retrieve from the LogManager. These are all part of the log4j-api module, which you can install as follows.

Maven:

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-api</artifactId>
  <version>${log4j-api.version}</version>
</dependency>

Gradle:

implementation 'org.apache.logging.log4j:log4j-api:${log4j-api.version}'

You can use the Logger instance to log with methods like info(), warn(), error(), etc. These methods are named after the log levels they represent, a way to categorize log events by severity. The log message can also contain placeholders written as {} that will be replaced by the arguments passed to the method.

import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;

public class DbTableService {

    private static final Logger LOGGER = LogManager.getLogger(); // (1)

    public void truncateTable(String tableName) throws IOException {
        LOGGER.warn("truncating table `{}`", tableName); // (2)
        db.truncate(tableName);
    }
}

(1) The returned Logger instance is thread-safe and reusable.
Unless explicitly provided as an argument, getLogger() associates the returned Logger with the enclosing class, that is, DbTableService in this example.

(2) The placeholder {} in the message will be replaced with the value of tableName.

The generated log event, which contains the user-provided log message and log level (i.e., WARN), will be enriched with several other implicitly derived pieces of contextual information: timestamp, class and method name, line number, etc.

What happens to the generated log event will vary significantly depending on the configuration used. It can be pretty-printed to the console, written to a file, or ignored entirely due to insufficient severity or some other filtering.

Log levels are used to categorize log events by severity and control the verbosity of the logs. Log4j contains various predefined levels, but the most common are DEBUG, INFO, WARN, and ERROR. With them, you can filter out less important logs and focus on the most critical ones. Previously we used Logger#warn() to log a warning message, which could mean that something is not right, but the application can continue. Log levels have a priority, and WARN is less severe than ERROR.

Exceptions are often also errors. In this case, we might use the ERROR log level. Make sure to log exceptions that have diagnostic value. This is simply done by passing the exception as the last argument to the log method:

LOGGER.warn("truncating table `{}`", tableName);
try {
    db.truncate(tableName);
} catch (IOException exception) {
    LOGGER.error("failed truncating table `{}`", tableName, exception); // (1)
    throw new IOException("failed truncating table: " + tableName, exception);
}

(1) By using error() instead of warn(), we signal that the operation failed. While there is only one placeholder in the message, we pass two arguments: tableName and exception. Log4j will attach the last extra argument, if it is of type Throwable, to the generated log event in a separate field.
Log messages are often used interchangeably with log events. While this simplification holds in several cases, it is not technically correct. A log event, capturing the logging context (level, logger name, instant, etc.) along with the log message, is generated by the logging implementation (e.g., Log4j Core) when a user issues a log through a logger, e.g., LOGGER.info("Hello, world!"). Hence, log events are compound objects containing log messages.

Log events contain fields that can be classified into three categories:

Some fields are provided explicitly, in a Logger method call. The most important are the log level and the log message, which is a description of what happened, addressed to humans.

Some fields are contextual (e.g., Thread Context) and are either provided explicitly by developers of other parts of the application, or injected by Java instrumentation.

The last category of fields is those that are computed automatically by the logging implementation employed.
For clarity’s sake let us look at a log event formatted as JSON: { (1) "log.level": "INFO", "message": "Unable to insert data into my_table.", "error.type": "java.lang.RuntimeException", "error.message": null, "error.stack_trace": [ { "class": "com.example.Main", "method": "doQuery", "file.name": "Main.java", "file.line": 36 }, { "class": "com.example.Main", "method": "main", "file.name": "Main.java", "file.line": 25 } ], "marker": "SQL", "log.logger": "com.example.Main", (2) "tags": [ "SQL query" ], "labels": { "span_id": "3df85580-f001-4fb2-9e6e-3066ed6ddbb1", "trace_id": "1b1f8fc9-1a0c-47b0-a06f-af3c1dd1edf9" }, (3) "@timestamp": "2024-05-23T09:32:24.163Z", "log.origin.class": "com.example.Main", "log.origin.method": "doQuery", "log.origin.file.name": "Main.java", "log.origin.file.line": 36, "process.thread.id": 1, "process.thread.name": "main", "process.thread.priority": 5 } 1 Explicitly supplied fields: log.level The level of the event, either explicitly provided as an argument to the logger call, or implied by the name of the logger method message The log message that describes what happened error.* An optional Throwable explicitly passed as an argument to the logger call marker An optional marker explicitly passed as an argument to the logger call log.logger The logger name provided explicitly to LogManager.getLogger() or inferred by Log4j API 2 Contextual fields: tags The Thread Context stack labels The Thread Context map 3 Logging backend specific fields. In case you are using Log4j Core, the following fields can be automatically generated: @timestamp The instant of the logger call log.origin.* The location of the logger call in the source code process.thread.* The name of the Java thread, where the logger is called Best practices There are several widespread bad practices while using Log4j API. Let’s try to walk through the most common ones and see how to fix them. Don’t use toString() Don’t use Object#toString() in arguments, it is redundant! /* BAD! 
*/ LOGGER.info("userId: {}", userId.toString()); Underlying message type and layout will deal with arguments: /* GOOD */ LOGGER.info("userId: {}", userId); Pass exception as the last extra argument Don’t call Throwable#printStackTrace() ! This not only circumvents the logging but can also leak sensitive information! /* BAD! */ exception.printStackTrace(); Don’t use Throwable#getMessage() ! This prevents the log event from getting enriched with the exception. /* BAD! */ LOGGER.info("failed", exception.getMessage()); /* BAD! */ LOGGER.info("failed for user ID `{}`: {}", userId, exception.getMessage()); Don’t provide both Throwable#getMessage() and Throwable itself! This bloats the log message with a duplicate exception message. /* BAD! */ LOGGER.info("failed for user ID `{}`: {}", userId, exception.getMessage(), exception); Pass exception as the last extra argument: /* GOOD */ LOGGER.error("failed", exception); /* GOOD */ LOGGER.error("failed for user ID `{}`", userId, exception); Don’t use string concatenation If you are using String concatenation while logging, you are doing something very wrong and dangerous! Don’t use String concatenation to format arguments! This circumvents the handling of arguments by message type and layout. More importantly, this approach is prone to attacks! Imagine userId being provided by the user with the following content: placeholders for non-existing args to trigger failure: {} {} {dangerousLookup} /* BAD! */ LOGGER.info("failed for user ID: " + userId); Use message parameters /* GOOD */ LOGGER.info("failed for user ID `{}`", userId); Use Supplier s to pass computationally expensive arguments If one or more arguments of the log statement are computationally expensive, it is not wise to evaluate them knowing that their results can be discarded. Consider the following example: /* BAD! 
*/ LOGGER.info("failed for user ID `{}` and role `{}`", userId, db.findUserRoleById(userId)); The database query (i.e., db.findUserRoleById(userId)) can be a significant bottleneck if the created log event will be discarded anyway – maybe the INFO level is not accepted for this logger, or due to some other filtering. The old-school way of solving this problem is to level-guard the log statement: /* OKAY */ if (LOGGER.isInfoEnabled()) { LOGGER.info(...); } While this would work for cases where the message can be dropped due to insufficient level, this approach is still prone to other filtering cases; e.g., maybe the associated marker is not accepted. Use Suppliers to pass arguments containing computationally expensive items: /* GOOD */ LOGGER.info("failed for user ID `{}` and role `{}`", () -> userId, () -> db.findUserRoleById(userId)); Use a Supplier to pass the message and its arguments containing computationally expensive items: /* GOOD */ LOGGER.info(() -> new ParameterizedMessage("failed for user ID `{}` and role `{}`", userId, db.findUserRoleById(userId))); Loggers Loggers are the primary entry point for logging. In this section, we introduce further details about Loggers. Refer to Architecture to see where Loggers stand in the big picture. Logger names Most logging implementations use a hierarchical scheme for matching logger names with logging configuration. In this scheme, the logger name hierarchy is represented by . (dot) characters in the logger name, in a fashion very similar to the hierarchy used for Java package names. For example, org.apache.logging.appender and org.apache.logging.filter both have org.apache.logging as their parent. In most cases, applications name their loggers by passing the current class’s name to LogManager.getLogger(…​). Because this usage is so common, Log4j provides that as the default when the logger name parameter is either omitted or is null.
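How a library can infer the calling class for that default can be sketched in plain Java (an illustration of the general idea only, not Log4j's actual implementation; the class and method names below are hypothetical):

```java
// Illustration only: a logging facade can infer the caller's class name
// when no logger name is given, using StackWalker (Java 9+).
public class CallerNameSketch {

    // getCallerClass() returns the class of the caller of the method that
    // invokes it -- so this must be called directly by the class that
    // wants a logger named after itself.
    static String inferLoggerName() {
        return StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE)
                .getCallerClass()
                .getName();
    }

    public static void main(String[] args) {
        // Called from CallerNameSketch.main, so the inferred name is this class.
        System.out.println(inferLoggerName());
    }
}
```

The sketch only shows why an argument-less `getLogger()` can still produce a class-specific logger name.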
For example, all Logger-typed variables below will have a name of com.example.LoggerNameTest: public class LoggerNameTest { Logger logger1 = LogManager.getLogger(LoggerNameTest.class); Logger logger2 = LogManager.getLogger(LoggerNameTest.class.getName()); Logger logger3 = LogManager.getLogger(); } We suggest using LogManager.getLogger() without any arguments, since it delivers the same functionality with fewer characters and is not prone to copy-paste errors. Logger message factories Loggers translate LOGGER.info("Hello, {}!", name); calls to the appropriate canonical logging method: LOGGER.log(Level.INFO, messageFactory.createMessage("Hello, {}!", new Object[] {name})); Note that how Hello, {}! is encoded, given new Object[] {name} as the argument array, depends entirely on the MessageFactory employed. Log4j allows users to customize this behaviour in several getLogger() methods of LogManager: LogManager.getLogger() (1) .info("Hello, {}!", name); (2) LogManager.getLogger(StringFormatterMessageFactory.INSTANCE) (3) .info("Hello, %s!", name); (4) 1 Create a logger using the default message factory 2 Use default parameter placeholders, that is, {} style 3 Explicitly provide the message factory, that is, StringFormatterMessageFactory. Note that there are several other getLogger() methods accepting a MessageFactory. 4 Note the placeholder change from {} to %s! The passed Hello, %s! and name arguments will be implicitly translated to a String.format("Hello, %s!", name) call due to the employed StringFormatterMessageFactory. Log4j bundles several predefined message factories. Some common ones are accessible through convenient factory methods, which we will cover below. Formatter logger The Logger instance returned by default replaces the occurrences of {} placeholders with the toString() output of the associated parameter.
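That substitution behaves roughly like the following plain-Java sketch (a simplified illustration, not Log4j's ParameterizedMessage: escaped placeholders and recursion limits are ignored, and the class name is hypothetical):

```java
// Illustration only: replace each "{}" in the pattern with the next
// argument, using String.valueOf() semantics (so null becomes "null").
public class PlaceholderSketch {

    static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (i + 1 < pattern.length()
                    && pattern.charAt(i) == '{' && pattern.charAt(i + 1) == '}'
                    && argIndex < args.length) {
                out.append(args[argIndex++]); // StringBuilder.append(Object) handles null
                i += 2;
            } else {
                out.append(pattern.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(format("Logging in user {} with ID {}", "alice", 42));
    }
}
```

A placeholder with no matching argument is left untouched, which is also why the malicious `{}`-laden input shown earlier cannot consume extra arguments when passed as a parameter rather than concatenated into the pattern.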
If you need more control over how the parameters are formatted, you can also use the java.util.Formatter format strings by obtaining your Logger using LogManager#getFormatterLogger(): Logger logger = LogManager.getFormatterLogger(); logger.debug("Logging in user %s with birthday %s", user.getName(), user.getBirthdayCalendar()); logger.debug( "Logging in user %1$s with birthday %2$tm %2$te,%2$tY", user.getName(), user.getBirthdayCalendar()); logger.debug("Integer.MAX_VALUE = %,d", Integer.MAX_VALUE); logger.debug("Long.MAX_VALUE = %,d", Long.MAX_VALUE); Loggers returned by getFormatterLogger() are referred to as formatter loggers. printf() method Formatter loggers give fine-grained control over the output format, but have the drawback that the correct type must be specified. For example, passing anything other than a decimal integer for a %d format parameter throws an exception. If you mainly use {}-style parameters, but occasionally need fine-grained control over the output format, you can use the Logger#printf() method: Logger logger = LogManager.getLogger("Foo"); logger.debug("Opening connection to {}...", someDataSource); logger.printf(Level.INFO, "Hello, %s!", userName); Formatter performance Keep in mind that, in contrast to the formatter logger, the default Log4j logger (i.e., {}-style parameters) is heavily optimized for several use cases and can operate garbage-free when configured correctly. You might reconsider your formatter logger usages for latency-sensitive applications. Event logger EventLogger is a convenience to log StructuredDataMessages, which format their content in a way compliant with the Syslog message format described in RFC 5424. Event Logger is deprecated for removal! We advise users to switch to plain Logger instead. Read more on event loggers…​ Simple logger Even though Log4j Core is the reference implementation of Log4j API, Log4j API itself also provides a very minimalist implementation: Simple Logger.
This is a convenience for environments where a fully-fledged logging implementation is either missing or cannot be included for other reasons. SimpleLogger is the fallback Log4j API implementation if no other is available in the classpath. Read more on the simple logger…​ Status logger Status Logger is a standalone, self-sufficient Logger implementation to record events that occur in the logging system (i.e., Log4j) itself. It is the logging system used by Log4j for reporting the status of its internals. Users can use the status logger either to emit logs from their custom Log4j components or to troubleshoot a Log4j configuration. Read more on the status logger…​ Fluent API The fluent API allows you to log using a fluent interface: LOGGER.atInfo() .withMarker(marker) .withLocation() .withThrowable(exception) .log("Login for user `{}` failed", userId); Read more on the Fluent API…​ Fish tagging Just as a fish can be tagged and have its movement tracked (a.k.a. fish tagging [1]), stamping log events with a common tag or set of data elements allows the complete flow of a transaction or a request to be tracked. You can use them for several purposes, such as: Provide extra information while serializing the log event Allow filtering of information so that it does not overwhelm the system or the individuals who need to make use of it Log4j provides fish tagging in several flavors: Levels Log levels are used to categorize log events by severity. Log4j contains predefined levels, of which the most common are DEBUG, INFO, WARN, and ERROR. Log4j also allows you to introduce your own custom levels.
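Severity-based filtering can be sketched in plain Java (an illustration only; the integer severities below mirror Log4j's standard levels, where a lower number is more severe, and the class itself is hypothetical):

```java
// Illustration only: Log4j's standard levels carry integer severities
// (OFF=0, FATAL=100, ERROR=200, WARN=300, INFO=400, DEBUG=500, TRACE=600).
// A logger configured at level L accepts an event whose severity number
// is less than or equal to L's.
public class LevelSketch {

    static final int ERROR = 200, WARN = 300, INFO = 400, DEBUG = 500;

    static boolean accepts(int configuredLevel, int eventLevel) {
        return eventLevel <= configuredLevel;
    }

    public static void main(String[] args) {
        System.out.println(accepts(INFO, WARN));  // a WARN event passes an INFO logger
        System.out.println(accepts(INFO, DEBUG)); // a DEBUG event is filtered out
    }
}
```

This ordering is why a logger set to INFO also emits WARN and ERROR events but drops DEBUG and TRACE.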
Read more on custom levels…​ Markers Markers are programmatic labels developers can associate with log statements: public class MyApp { private static final Logger LOGGER = LogManager.getLogger(); private static final Marker ACCOUNT_MARKER = MarkerManager.getMarker("ACCOUNT"); public void removeUser(String userId) { LOGGER.debug(ACCOUNT_MARKER, "Removing user with ID `{}`", userId); // ... } } Read more on markers…​ Thread Context Just like Java’s ThreadLocal, Thread Context facilitates associating information with the executing thread and making this information accessible to the rest of the logging system. Thread Context offers both map-structured storage – referred to as the Thread Context Map or Mapped Diagnostic Context (MDC) – and stack-structured storage – referred to as the Thread Context Stack or Nested Diagnostic Context (NDC): ThreadContext.put("ipAddress", request.getRemoteAddr()); (1) ThreadContext.put("hostName", request.getServerName()); (1) ThreadContext.put("loginId", session.getAttribute("loginId")); (1) void performWork() { ThreadContext.push("performWork()"); (2) LOGGER.debug("Performing work"); (3) // Perform the work ThreadContext.pop(); (4) } ThreadContext.clear(); (5) 1 Adding properties to the thread context map 2 Pushing properties to the thread context stack 3 Added properties can later be used to, for instance, filter the log event, provide extra information in the layout, etc. 4 Popping the last pushed property from the thread context stack 5 Clearing the thread context (for both stack and map!) Read more on Thread Context…​ Messages Whereas almost every other logging API and implementation accepts only String-typed input as the message, Log4j generalizes this concept with a Message contract. Customizability of the message type enables users to have complete control over how a message is encoded by Log4j.
This liberal approach allows applications to choose the message type best fitting their logging needs; they can log plain Strings or custom PurchaseOrder objects. Log4j provides several predefined message types to cater for common use cases: Simple String-typed messages: LOGGER.info("foo"); LOGGER.info(new SimpleMessage("foo")); String-typed parameterized messages: LOGGER.info("foo {} {}", "bar", "baz"); LOGGER.info(new ParameterizedMessage("foo {} {}", new Object[] {"bar", "baz"})); Map-typed messages: LOGGER.info(new StringMapMessage().with("key1", "val1").with("key2", "val2")); Read more on messages…​ Flow tracing The Logger class provides the traceEntry(), traceExit(), catching(), and throwing() methods, which are quite useful for following the execution path of applications. These methods generate log events that can be filtered separately from other debug logging. Read more on flow tracing…​ 1. Fish tagging was first described by Neil Harrison in the "Patterns for Logging Diagnostic Messages" chapter of "Pattern Languages of Program Design 3", edited by R. Martin, D. Riehle, and F. Buschmann in 1997. Copyright © 1999-2025 The Apache Software Foundation . Licensed under the Apache Software License, Version 2.0 . Please read our privacy policy . Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34
https://docs.aws.amazon.com/ko_kr/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html | Setting IAM permissions and roles for Lambda@Edge - Amazon CloudFront Documentation Amazon CloudFront Developer Guide IAM permissions required to associate a Lambda@Edge function with a CloudFront distribution Function execution role for the service principals Service-linked roles for Lambda@Edge Setting IAM permissions and roles for Lambda@Edge To configure Lambda@Edge, you must have the following IAM permissions and roles for AWS Lambda: IAM permissions – These permissions allow you to create the Lambda function and associate it with your CloudFront distribution. Lambda function execution role (IAM role) – The Lambda service principals assume this role to execute your function. Lambda@Edge service-linked roles – The service-linked roles allow specific AWS services to replicate Lambda functions to AWS Regions and allow CloudWatch to use CloudFront log files. IAM permissions required to associate a Lambda@Edge function with a CloudFront distribution To associate a Lambda function with a CloudFront distribution, a user needs the following IAM permissions in addition to the permissions required for Lambda: lambda:GetFunction – Grants permission to get configuration information for the Lambda function and a presigned URL to download the .zip file that contains the function. lambda:EnableReplication* – Grants permission on the resource policy so that the Lambda replication service can get the function code and configuration. lambda:DisableReplication* – Grants permission on the resource policy so that the Lambda replication service can delete the function. Important You must add the asterisk ( * ) at the end of the lambda:EnableReplication * and lambda:DisableReplication * actions. For the resource, specify the ARN of the function version to execute when a CloudFront event occurs, as shown in the following example: arn:aws:lambda:us-east-1:123456789012:function: TestFunction :2 iam:CreateServiceLinkedRole – Grants permission to create the service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. The service-linked role is created automatically after you configure Lambda@Edge for the first time. You don't need to add this permission for other distributions that use Lambda@Edge. cloudfront:UpdateDistribution or cloudfront:CreateDistribution – Grants permission to update or create a distribution. For more information, see the following topics: Identity and Access Management for Amazon CloudFront Lambda resource access permissions in the AWS Lambda Developer Guide Function execution role for the service principals You must create an IAM role that the lambda.amazonaws.com and edgelambda.amazonaws.com service principals can assume when executing your function. Tip When you create a function in the Lambda console, you can use an AWS policy template to create a new execution role. This step automatically adds the Lambda@Edge permissions required to execute the function. See Step 5: Create a simple Lambda@Edge function in the tutorial.
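The manual route can be sketched with the AWS CLI (a sketch only: the role name and file path are illustrative assumptions, and the create-role call requires AWS credentials, so it is shown commented out):

```shell
# Write the trust policy that lets the two Lambda@Edge service principals
# assume the execution role.
cat > /tmp/lambda-edge-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": ["lambda.amazonaws.com", "edgelambda.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Local sanity check: the document must parse as JSON.
python3 -m json.tool < /tmp/lambda-edge-trust.json > /dev/null && echo "trust policy OK"

# Then create the role (requires AWS credentials; the role name is illustrative):
# aws iam create-role \
#   --role-name my-lambda-edge-role \
#   --assume-role-policy-document file:///tmp/lambda-edge-trust.json
```

After creating the role, you would still attach a permissions policy (for example, the predefined AWSLambdaBasicExecutionRole policy mentioned below) before using it as the function's execution role.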
For more information about manually creating an IAM role, see Creating a role and attaching a policy (console) in the IAM User Guide. Example: role trust policy You can add this role under the Trust relationships tab in the IAM console. Do not add this policy under the Permissions tab. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } For more information about the permissions that you must grant to the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide. Notes By default, whenever a CloudFront event triggers your Lambda function, data is written to CloudWatch Logs. If you want to use these logs, the execution role needs permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole to grant permissions to the execution role. For more information about CloudWatch Logs, see the Edge function logs section. If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that operation. Service-linked roles for Lambda@Edge Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. Lambda@Edge uses the following IAM service-linked roles: AWSServiceRoleForLambdaReplicator – Lambda@Edge uses this role to allow Lambda@Edge to replicate functions to AWS Regions. When you first add a Lambda@Edge trigger in CloudFront, a role named AWSServiceRoleForLambdaReplicator is created automatically to allow Lambda@Edge to replicate functions to AWS Regions. This role is required to use Lambda@Edge functions. The ARN for the AWSServiceRoleForLambdaReplicator role looks like the following example: arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator AWSServiceRoleForCloudFrontLogger – CloudFront uses this role to push log files to CloudWatch. You can use the log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association, to allow CloudFront to push Lambda@Edge error log files to CloudWatch. The ARN of the AWSServiceRoleForCloudFrontLogger role looks like the following: arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger Service-linked roles make it easier to set up and use Lambda@Edge because you don't have to manually add the required permissions. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume those roles.
The defined permissions include a trust policy and a permissions policy. The permissions policy cannot be attached to any other IAM entity. You must delete any associated CloudFront or Lambda@Edge resources before you can delete a service-linked role. This protects your Lambda@Edge resources by preventing you from removing a service-linked role that is still required to access active resources. For more information about service-linked roles, see Service-linked roles for CloudFront. Service-linked role permissions for Lambda@Edge Lambda@Edge uses service-linked roles named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each of these roles. Contents Service-linked role permissions for Lambda Replicator Service-linked role permissions for CloudFront Logger Service-linked role permissions for Lambda Replicator This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the specified resources: lambda:CreateFunction on arn:aws:lambda:*:*:function:* lambda:DeleteFunction on arn:aws:lambda:*:*:function:* lambda:DisableReplication on arn:aws:lambda:*:*:function:* iam:PassRole on all AWS resources cloudfront:ListDistributionsByLambdaFunction on all AWS resources Service-linked role permissions for CloudFront Logger This service-linked role allows CloudFront to push log files to CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the specified arn:aws:logs:*:*:log-group:/aws/cloudfront/* resource: logs:CreateLogGroup logs:CreateLogStream logs:PutLogEvents You must configure permissions to allow an IAM entity (such as a user, group, or role) to delete the Lambda@Edge service-linked roles. For more information, see Service-linked role permissions in the IAM User Guide. Creating service-linked roles for Lambda@Edge You typically don't create the service-linked roles for Lambda@Edge manually. The service creates the roles for you automatically in the following scenarios: When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role if it doesn't already exist. This role allows Lambda to replicate Lambda@Edge functions to AWS Regions. If you delete the service-linked role, the role is created again when you add a new trigger for Lambda@Edge in a distribution. When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role if it doesn't already exist. This role allows CloudFront to push log files to CloudWatch.
Even if you delete the service-linked role, the role is created again when you update or create a CloudFront distribution that has a Lambda@Edge association. If you need to create these service-linked roles manually, run the following AWS Command Line Interface (AWS CLI) commands: To create the AWSServiceRoleForLambdaReplicator role, run the following command: aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com To create the AWSServiceRoleForCloudFrontLogger role, run the following command: aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com Editing the Lambda@Edge service-linked roles Lambda@Edge does not allow you to edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After the service creates a service-linked role, you cannot change the name of the role because various entities might reference it. However, you can use IAM to edit the role's description. For more information, see Editing a service-linked role in the IAM User Guide. Supported AWS Regions for Lambda@Edge service-linked roles CloudFront supports using service-linked roles for Lambda@Edge in the following AWS Regions: US East (N. Virginia) – us-east-1 US East (Ohio) – us-east-2 US West (N. California) – us-west-1 US West (Oregon) – us-west-2 Asia Pacific (Mumbai) – ap-south-1 Asia Pacific (Seoul) – ap-northeast-2 Asia Pacific (Singapore) – ap-southeast-1 Asia Pacific (Sydney) – ap-southeast-2 Asia Pacific (Tokyo) – ap-northeast-1 Europe (Frankfurt) – eu-central-1 Europe (Ireland) – eu-west-1 Europe (London) – eu-west-2 South America (São Paulo) – sa-east-1 | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/manual/simple-logger.html | Simple Logger :: Apache Log4j a subproject of Apache Logging Services Simple Logger Even though Log4j Core is the reference implementation of Log4j API, Log4j API itself also provides a very minimalist implementation: SimpleLogger. This is a convenience for environments where a fully-fledged logging implementation is either missing or cannot be included for other reasons. SimpleLogger is the default Log4j API implementation if no other is available in the classpath. Configuration Logger SimpleLogger can be configured using the following system properties:
log4j2.simplelogLevel (env. variable: LOG4J_SIMPLELOG_LEVEL; type: Level; default: ERROR) – Default level for new logger instances.
log4j2.simplelog.<loggerName>.level (env. variable: LOG4J_SIMPLELOG_<loggerName>_LEVEL; type: Level; default: value of log4j2.simplelogLevel) – Log level for a logger instance named <loggerName>.
log4j2.simplelogShowContextMap (env. variable: LOG4J_SIMPLELOG_SHOW_CONTEXT_MAP; type: boolean; default: false) – If true, the full thread context map is included in each log message.
log4j2.simplelogShowlogname (env. variable: LOG4J_SIMPLELOG_SHOWLOGNAME; type: boolean; default: false) – If true, the logger name is included in each log message.
log4j2.simplelogShowShortLogname (env. variable: LOG4J_SIMPLELOG_SHOW_SHORT_LOGNAME; type: boolean; default: true) – If true, only the last component of a logger name is included in each log message.
log4j2.simplelogShowdatetime (env. variable: LOG4J_SIMPLELOG_SHOWDATETIME; type: boolean; default: false) – If true, a timestamp is included in each log message.
log4j2.simplelogDateTimeFormat (env. variable: LOG4J_SIMPLELOG_DATE_TIME_FORMAT; type: SimpleDateFormat pattern; default: yyyy/MM/dd HH:mm:ss:SSS zzz) – Date-time format to use. Ignored if log4j2.simplelogShowdatetime is false.
log4j2.simplelogLogFile (env. variable: LOG4J_SIMPLELOG_LOG_FILE; type: Path or predefined constant; default: System.err) – Specifies the output stream used by all loggers. Its value can be the path to a log file or one of these constants: System.err logs to the standard error output stream; System.out logs to the standard output stream.
Thread context For the configuration of the thread context, Simple Logger supports a subset of the properties supported by Log4j Core:
log4j2.disableThreadContext (env. variable: LOG4J_DISABLE_THREAD_CONTEXT; type: boolean; default: false) – If true, the ThreadContext stack and map are disabled.
log4j2.disableThreadContextStack (env. variable: LOG4J_DISABLE_THREAD_CONTEXT_STACK; type: boolean; default: false) – If true, the ThreadContext stack is disabled.
log4j2.disableThreadContextMap (env. variable: LOG4J_DISABLE_THREAD_CONTEXT_MAP; type: boolean; default: false) – If true, the ThreadContext map is disabled.
log4j2.threadContextMap (env. variable: LOG4J_THREAD_CONTEXT_MAP; type: Class<? extends ThreadContextMap>; default: DefaultThreadContextMap) – Fully specified class name of a custom ThreadContextMap implementation class.
log4j2.isThreadContextMapInheritable (env. variable: LOG4J_IS_THREAD_CONTEXT_MAP_INHERITABLE; type: boolean; default: false) – If true, uses an InheritableThreadLocal to copy the thread context map to newly created threads. Note that, as explained in Java’s Executors#privilegedThreadFactory(), when you are dealing with privileged threads, the thread context might not get propagated completely. | 2026-01-13T09:30:34
https://llvmweekly.org/issue/566 | LLVM Weekly - #566, November 4th 2024 Welcome to the five hundred and sixty-sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events LLVM 19.1.3 was released . The Call for Papers is out for the ninth LLVM Performance Workshop at CGO . The deadline is January 25th, and the event will take place March 1st-5th. Sahil Patidar wrote on the LLVM blog about achievements in the out-of-process execution for clang-repl GSoC project . Rafael Eckstein blogged about building the LLVM test suite in order to test the effect of a particular pass . The next Portland LLVM social will take place on November 7th . According to the LLVM calendar, in the coming week there will be the following: Office hours with the following hosts: Anastasia Stulova, Kristof Beyls, Johannes Doerfert. Online sync-ups on the following topics: MLIR C/C++ frontend, pointer authentication, new contributors, OpenMP, Clang C/C++ language working group, Flang, SPIR-V, RISC-V, MLIR, LLVM embedded toolchains, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync-ups and office hours . On the forums Renato Golin, in collaboration with others, started an RFC discussion on an MLIR project charter and restructuring . The RFC is detailed and there’s a lot of discussion that I can’t meaningfully summarise, so be sure to dive in and read if the future direction of MLIR is of interest. Amr Hesham shared a new project, llql , that allows you to run SQL-like queries against LLVM IR.
Tobias Hieta started a discussion about a potential ABI break in the 19.1.3 release and how to handle it . Schrodinger Zhu started a discussion about what target triple should be used when targeting LLVM libc . Steven Wu provided an update on the now-restarted upstreaming effort for LLVM content addressable storage (CAS) . Aaron Ballman put out a call for volunteers to be a maintainer for a Clang component . More notes from LLVM Dev Meeting round tables were shared (thanks to those taking notes and sharing!): MLIR , office hours , bounds safety , debug info . Aaron Ballman notes that discussions seem to be winding down with consensus reached for RFCs on typed allocator support , and controlling diagnostic severities at file-level granularity . So do speak up if you disagree. Issue #71 of MLIR news is now out . Maksim Levental proposed an incubator for MLIR language frontends / bindings . There have been plenty of questions about the details so far, but those who have responded largely seem positive. LLVM commits The documentation on llvm-lit options was updated. a8398bd . A TrieRawHashMap data structure was added to LLVM’s ADT library. b510cdb . Support was added for the FEAT_PAuthLR DWARF instruction. 86f76c3 . A new instcombine-no-verify-fixpoint function attribute was introduced. f78610a . The RVA23U64, RVA23S64, RVB23U64, and RVB23S64 RISC-V profiles were marked as non-experimental. ba7555e , 7544d3a . An llvm.sincos intrinsic was introduced. c3260c6 . MVT::iPTRAny was renamed to MVT::pAny . 9467645 . When writing memprof profile information, it’s now possible to generate random hotness (for testing). bb39151 . Assembler/disassembler support was added for new AArch64 SME and SVE instructions. 95c5042 , b185e92 , c485ee1 . Mentions of IRC in the documentation are now updated to point to Discord instead. 0ab44fd . Clang commits Following an RFC, support was removed for RenderScript. af7c58b . 
clang-sycl-linker was added, providing a tool to link SYCL offloading device code. eeee5a4 . The -startfiles flag was added for GPU targets for use when compiling with libc, and start files should be included. d4c4180 . Function effect analysis was documented. 034cae4 . swift_attr can now be applied to types. d3daa3c . Nondeterministic pointer usage checks were moved to clang-tidy. 3d6923d . Other project commits Syscall and setjmp/longjmp support for i386 was added to LLVM’s libc. 8413599 , b1320d3 . LLDB can now break on call-site locations. f147437 . The lldb-repro utility was removed, as the reproducer functionality was removed from lldb. 88591aa . A “walk pattern” rewrite driver was added to MLIR, intended to be fast and simple. 0f8a6b7 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-testing-debugging.html | Test and debug Lambda@Edge functions - Amazon CloudFront Documentation Amazon CloudFront Developer Guide
Test and debug Lambda@Edge functions
It is important to test your Lambda@Edge function code standalone to verify that it completes the intended task, and to perform integration testing to verify that the function works correctly with CloudFront. During integration testing, or after the function has been deployed, you might need to debug CloudFront errors such as HTTP 5xx errors. Errors can be an invalid response returned by the Lambda function, execution errors raised when the function is triggered, or errors caused by execution throttling by the Lambda service. The sections in this topic share strategies for determining which type of failure you have, and steps to resolve it.
Note: When you review CloudWatch log files or metrics while troubleshooting errors, be aware that they are displayed or stored in the AWS Region closest to the location where the function executed. So, for example, if you have a website or web application with users in the United Kingdom, and a Lambda function associated with your distribution, you must change the Region to view the CloudWatch metrics or log files for the London AWS Region. For more information, see Determine the Lambda@Edge Region.
Topics: Test your Lambda@Edge functions / Identify Lambda@Edge function errors in CloudFront / Troubleshoot invalid Lambda@Edge function responses (validation errors) / Troubleshoot Lambda@Edge function execution errors / Determine the Lambda@Edge Region / Determine whether your account pushes logs to CloudWatch
Test your Lambda@Edge functions There are two steps to testing your Lambda function: standalone testing and integration testing. Test the function standalone Before you add your Lambda function to CloudFront, make sure to test the functionality first by using the testing features in the Lambda console or by other methods. For more information about testing in the Lambda console, see Invoke a Lambda function using the console in the AWS Lambda Developer Guide. Test the function's operation in CloudFront It is important to complete integration testing, where the function is associated with a distribution and runs based on CloudFront events. Make sure that the function is triggered for the right events, and that it returns a valid and correct response to CloudFront. For example, check that the event structure is correct, that only valid headers are included, and so on. As you iterate on integration tests with your function in the Lambda console, refer to the steps in the Lambda@Edge tutorial whenever you change your code or change the CloudFront triggers that invoke the function. For example, make sure that you are working with a numbered version of the function, as described in Step 4: Add a CloudFront trigger to run the function in the tutorial. After you make changes and deploy them, be aware that it takes time for the updated function and CloudFront triggers to replicate across all Regions. This typically takes a few minutes, but can take up to 15 minutes. To check whether replication has finished, go to the CloudFront console and view your distribution. To check whether the replication has deployed: Open the CloudFront console ( https://console.aws.amazon.com/cloudfront/v4/home ). Choose the distribution name. Check that the distribution status has changed from In Progress back to Deployed, which means the function has been replicated. Then follow the steps in the next section to verify that the function works. Note that testing in the console only validates the function's logic; it does not apply the service quotas (formerly known as limits) that are specific to Lambda@Edge.
Identify Lambda@Edge function errors in CloudFront Even after verifying that your function's logic works correctly, you might still see HTTP 5xx errors when the function runs in CloudFront. HTTP 5xx errors can be returned for a variety of reasons, which can include Lambda function errors or other CloudFront issues. If you use Lambda@Edge functions, you can use graphs in the CloudFront console to track down the cause of the error and fix it. For example, you can check whether HTTP 5xx errors are caused by CloudFront or by your Lambda function, and then, for a specific function, view the related log files to investigate the issue. To troubleshoot HTTP errors in CloudFront in general, see the troubleshooting steps in Troubleshooting error response status codes in CloudFront. What causes Lambda@Edge function errors in CloudFront There are several reasons why a Lambda function can cause an HTTP 5xx error; the troubleshooting steps you take depend on the type of error. Errors can be categorized as follows: A Lambda function execution error. An execution error occurs when the function has unhandled exceptions or an error in the code prevents CloudFront from getting a response from Lambda – for example, when the code includes callback(error). An invalid Lambda function response is returned to CloudFront. After the function runs, CloudFront receives a response from Lambda. An error is returned if the object structure of the response does not conform to the Lambda@Edge event structure, or if the response contains invalid headers or other invalid fields. The execution in CloudFront is throttled because of Lambda service quotas (formerly known as limits). The Lambda service throttles executions in each Region and returns an error when the quota is reached. For more information, see Quotas on Lambda@Edge. How to determine the type of failure To decide where to focus when debugging and resolving errors returned by CloudFront, it helps to identify why CloudFront is returning an HTTP error. To get started, you can use the graphs in the Monitoring section of the CloudFront console in the AWS Management Console. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitoring CloudFront metrics with Amazon CloudWatch. The following graphs are especially helpful for tracking whether errors were returned by the origin or by a Lambda function, and for narrowing down the type of issue when the error comes from a Lambda function. Error rates graph One of the graphs you can view on the Overview tab for each distribution is the Error rates graph. This graph displays, as a percentage of all requests to the distribution, the total error rate, total 4xx errors, total 5xx errors, and total Lambda@Edge errors. Based on the type and volume of errors, you can take steps to investigate and troubleshoot the cause. If you see Lambda errors, you can drill down further by examining the specific types of errors that the function returns. The Lambda@Edge errors tab includes graphs that categorize function errors by type, to help you pinpoint the issue for a specific function. If you see CloudFront errors, you can troubleshoot and fix origin errors or change your CloudFront configuration. For more information, see Troubleshooting error response status codes in CloudFront. Execution errors and Invalid function responses graphs The Lambda@Edge errors tab includes graphs that categorize the Lambda@Edge errors for a specific distribution by type. For example, one graph shows all execution errors by AWS Region. To make troubleshooting easier, you can look for specific issues by opening and examining the log files for a specific function, by Region. To view log files for a specific function by Region: On the Lambda@Edge errors tab, under Associated Lambda@Edge functions, choose a function name, then choose View metrics. Next, in the upper-right corner of the function page, choose View function logs, then choose a Region. For example, if the Errors graph shows issues in the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console. In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events for your function. In addition, for recommendations on troubleshooting and fixing errors, see the following sections in this chapter. Throttles graph The Lambda@Edge errors tab also includes a Throttles graph. In some cases, the Lambda service throttles your function invocations per Region when the Region's concurrency quota is reached. If you see a Limit Exceeded error, your function has reached a quota that the Lambda service imposes on executions in a Region. For more information, including how to request a quota increase, see Quotas on Lambda@Edge. For an example of how to use this information when troubleshooting HTTP errors, see Four Steps for Debugging your Content Delivery on AWS.
Troubleshoot invalid Lambda@Edge function responses (validation errors) If you identify the problem as a Lambda validation error, it means that your Lambda function is returning an invalid response to CloudFront. Follow the guidance in this section to review your function and make sure that the response conforms to CloudFront requirements. CloudFront validates the response from a Lambda function in two ways: The Lambda response must conform to the required object structure. Examples of bad object structure include unparseable JSON, missing required fields, and an invalid object in the response. For more information, see the Lambda@Edge event structure. The response must include only valid object values. An error occurs if the response contains a valid object but has values that are not supported. Examples include adding or updating disallowed or read-only headers (see Restrictions on edge functions), exceeding the maximum body size (see Restrictions on the size of the generated response in the Lambda@Edge errors topic), and invalid characters or values (see the Lambda@Edge event structure). When Lambda returns an invalid response to CloudFront, error messages are written to log files that CloudFront pushes to CloudWatch in the Region where the Lambda function executed. Sending the log files to CloudWatch when there is an invalid response is the default behavior. However, if you associated a Lambda function with CloudFront before this feature was released, it might not be enabled for your function. For more information, see Determine whether your account pushes logs to CloudWatch later in this topic. CloudFront pushes log files to the log group associated with your distribution, in the Region corresponding to where your function executed. The log group format is /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where your CloudWatch log files are stored, see Determine the Lambda@Edge Region later in this topic. For a reproducible error, you can create a new request that results in the error, find the request ID in the failed CloudFront response (the X-Amz-Cf-Id header), and then locate the single failure in the log files. The log file entry includes information that can help you identify why the error is being returned, and it also lists the corresponding Lambda request ID so you can analyze the root cause in the context of a single request. If the error is intermittent, you can use CloudFront access logs to find the request ID for a failed request, and then search CloudWatch Logs for the corresponding error messages. For more information, see Determining the Type of Failure in the previous section.
Troubleshoot Lambda@Edge function execution errors If the problem is a Lambda execution error, it can help to add logging statements to your Lambda function that write messages to the CloudWatch log files which monitor the function's execution in CloudFront, so you can determine whether it is working correctly. You can then search for those statements in the CloudWatch log files to verify that your function works. Note: Even if you have not changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could cause execution errors. For information about testing and migrating to a newer version, see Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environment.
Determine the Lambda@Edge Region To see the Regions where your Lambda@Edge function is receiving traffic, view the metrics for the function in the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region. On the same page, you can choose a Region and view the log files for that Region to investigate issues. You must review the CloudWatch log files in the correct AWS Region to see the log files that were created when CloudFront executed your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitoring CloudFront metrics with Amazon CloudWatch.
Determine whether your account pushes logs to CloudWatch By default, CloudFront enables logging of invalid Lambda function responses and pushes the log files to CloudWatch by using one of the service-linked roles for Lambda@Edge. If you have Lambda@Edge functions that you added to CloudFront before the invalid-response logging feature was released, logging is enabled the next time that you update your Lambda@Edge configuration, for example, by adding a CloudFront trigger. You can verify that pushing log files to CloudWatch is enabled for your account by doing the following: Check whether the logs appear in CloudWatch – Make sure that you check in the Region where the Lambda@Edge function executed. For more information, see Determine the Lambda@Edge Region. Check whether the associated service-linked role exists in your account in IAM – Your account must have the IAM role AWSServiceRoleForCloudFrontLogger. For more information about this role, see Service-linked roles for Lambda@Edge. | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/hibernate.html | Integrating with Hibernate :: Apache Log4j, a subproject of Apache Logging Services Integrating with Hibernate Hibernate is an Object/Relational Mapping (ORM) solution for Java environments. It uses JBoss Logging as its logging API. If you have a working Log4j installation , JBoss Logging requires no extra installation steps on your part, since it is shipped with an integrated bridge to Log4j API – see Supported Log Managers by JBoss Logging for more information. Struggling with the logging API, implementation, and bridge concepts? Click for an introduction. Logging API A logging API is an interface your code or your dependencies directly logs against. It is required at compile-time.
It is implementation agnostic to ensure that your application can write logs, but is not tied to a specific logging implementation. Log4j API, SLF4J , JUL (Java Logging) , JCL (Apache Commons Logging) , JPL (Java Platform Logging) and JBoss Logging are major logging APIs. Logging implementation A logging implementation is only required at runtime and can be changed without the need to recompile your software. Log4j Core, JUL (Java Logging) , and Logback are the most well-known logging implementations. Logging bridge Logging implementations accept input from a single logging API of their preference; Log4j Core from Log4j API, Logback from SLF4J, etc. A logging bridge is a simple logging implementation of a logging API that forwards all messages to a foreign logging API. Logging bridges allow a logging implementation to accept input from other logging APIs that are not their primary logging API. For instance, log4j-slf4j2-impl bridges SLF4J calls to Log4j API and effectively enables Log4j Core to accept input from SLF4J. To make things a little bit more tangible, consider the following visualization of a typical Log4j Core installation with bridges for an application: Figure 1. Visualization of a typical Log4j Core installation with SLF4J, JUL, and JPL bridges Configuration After successfully wiring Hibernate – to be precise, JBoss Logging – to log using Log4j API, you can fine-tune the verbosity of Hibernate loggers in your Log4j Core installation to accommodate your needs: XML JSON YAML Properties Snippet from an example log4j2.xml configuring Hibernate-specific loggers <Loggers> <!-- Log just the SQL --> <Logger name="org.hibernate.SQL" level="DEBUG"/> <!-- Log JDBC bind parameters and extracted values Warning! (1) JDBC bind parameters can contain sensitive data! Passwords, credit card numbers, etc. Use these logger configurations with care!
--> <!-- <Logger name="org.hibernate.type" level="TRACE"/> <Logger name="org.hibernate.orm.jdbc.bind" level="TRACE"/> <Logger name="org.hibernate.orm.jdbc.extract" level="TRACE"/> --> Snippet from an example log4j2.json configuring Hibernate-specific loggers "Loggers": { "Logger": [ // Log just the SQL { "name": "org.hibernate.SQL", "level": "DEBUG" } // Log JDBC bind parameters and extracted values // // Warning! (1) // JDBC bind parameters can contain sensitive data: // Passwords, credit card numbers, etc. // Use these logger configurations with care! //{ // "name": "org.hibernate.type", // "level": "TRACE" //}, //{ // "name": "org.hibernate.orm.jdbc.bind", // "level": "TRACE" //}, //{ // "name": "org.hibernate.orm.jdbc.extract", // "level": "TRACE" //} Snippet from an example log4j2.yaml configuring Hibernate-specific loggers Loggers: Logger: # Log just the SQL - name: "org.hibernate.SQL" level: "DEBUG" # Log JDBC bind parameters and extracted values # # Warning! (1) # JDBC bind parameters can contain sensitive data! # Passwords, credit card numbers, etc. # Use these logger configurations with care! #- name: "org.hibernate.type" # level: "TRACE" #- name: "org.hibernate.orm.jdbc.bind" # level: "TRACE" #- name: "org.hibernate.orm.jdbc.extract" # level: "TRACE" Snippet from an example log4j2.properties configuring Hibernate-specific loggers # Log just the SQL logger.0.name = org.hibernate.SQL logger.0.level = DEBUG # Log JDBC bind parameters and extracted values # # Warning! (1) # JDBC bind parameters can contain sensitive data! # Passwords, credit card numbers, etc. # Use these logger configurations with great care! #logger.1.name = org.hibernate.type #logger.1.level = TRACE #logger.2.name = org.hibernate.orm.jdbc.bind #logger.2.level = TRACE #logger.3.name = org.hibernate.orm.jdbc.extract #logger.3.level = TRACE 1 Due to the sensitivity of the data involved, you are strongly advised to use these logger configurations only in development environments .
See the Logging Best Practices section in the Hibernate Manual for further details. We strongly advise you to avoid using the hibernate.show_sql property ! (It maps to spring.jpa.show-sql in Spring Boot.) hibernate.show_sql writes to the standard output stream, not to the logging API. The logger-based configuration exemplified above gives finer-grained control over logging and integrates with the logging system. Combining hibernate.show_sql with logger-based configuration duplicates the logging effort. | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/manual/api.html#fish-tagging | Log4j API :: Apache Log4j, a subproject of Apache Logging Services Log4j API Log4j is essentially composed of a logging API called Log4j API , and its reference implementation called Log4j Core . What is a logging API and a logging implementation? Logging API A logging API is an interface your code or your dependencies directly logs against. It is required at compile-time. It is implementation agnostic to ensure that your application can write logs, but is not tied to a specific logging implementation. Log4j API, SLF4J , JUL (Java Logging) , JCL (Apache Commons Logging) , JPL (Java Platform Logging) and JBoss Logging are major logging APIs.
Logging implementation A logging implementation is only required at runtime and can be changed without the need to recompile your software. Log4j Core, JUL (Java Logging) , Logback are the most well-known logging implementations. Are you looking for a crash course on how to use Log4j in your application or library? See Getting started . You can also check out Installation for the complete installation instructions. Log4j API provides A logging API that libraries and applications can code to A minimal logging implementation (aka. Simple logger) Adapter components to create a logging implementation This page tries to cover the most prominent Log4j API features. Did you know that Log4j provides specialized APIs for Kotlin and Scala? Check out Log4j Kotlin and Log4j Scala projects for details. Introduction To log, you need a Logger instance which you will retrieve from the LogManager . These are all part of the log4j-api module, which you can install as follows: Maven Gradle <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>${log4j-api.version}</version> </dependency> implementation 'org.apache.logging.log4j:log4j-api:${log4j-api.version}' You can use the Logger instance to log by using methods like info() , warn() , error() , etc. These methods are named after the log levels they represent, a way to categorize log events by severity. The log message can also contain placeholders written as {} that will be replaced by the arguments passed to the method. import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.LogManager; public class DbTableService { private static final Logger LOGGER = LogManager.getLogger(); (1) public void truncateTable(String tableName) throws IOException { LOGGER.warn("truncating table `{}`", tableName); (2) db.truncate(tableName); } } 1 The returned Logger instance is thread-safe and reusable. 
Unless explicitly provided as an argument, getLogger() associates the returned Logger with the enclosing class, that is, DbTableService in this example. 2 The placeholder {} in the message will be replaced with the value of tableName . The generated log event , which contains the user-provided log message and log level (i.e., WARN ), will be enriched with several pieces of implicitly derived contextual information: timestamp, class & method name, line number, etc. What happens to the generated log event will vary significantly depending on the configuration used. It can be pretty-printed to the console, written to a file, or get totally ignored due to insufficient severity or some other filtering. Log levels are used to categorize log events by severity and control the verbosity of the logs. Log4j contains various predefined levels, but the most common are DEBUG , INFO , WARN , and ERROR . With them, you can filter out less important logs and focus on the most critical ones. Previously we used Logger#warn() to log a warning message, which could mean that something is not right, but the application can continue. Log levels have a priority, and WARN is less severe than ERROR . Exceptions are often also errors. In this case, we might use the ERROR log level. Make sure to log exceptions that have diagnostic value. This is simply done by passing the exception as the last argument to the log method: LOGGER.warn("truncating table `{}`", tableName); try { db.truncate(tableName); } catch (IOException exception) { LOGGER.error("failed truncating table `{}`", tableName, exception); (1) throw new IOException("failed truncating table: " + tableName, exception); } 1 By using error() instead of warn() , we signal that the operation failed. While there is only one placeholder in the message, we pass two arguments: tableName and exception . Log4j will attach the last extra argument of type Throwable in a separate field to the generated log event.
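The `{}` substitution described above is performed by the message type, not by string concatenation in the logger. As a rough, stdlib-only illustration of the idea – this is not Log4j's actual `ParameterizedMessage` implementation, just a sketch – each `{}` consumes the next argument in order:

```java
// Simplified sketch of {}-style placeholder substitution.
// NOT Log4j's real implementation (see org.apache.logging.log4j.message.ParameterizedMessage);
// a stdlib-only illustration of how each {} consumes the next argument.
public class PlaceholderDemo {
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (i + 1 < pattern.length()
                    && pattern.charAt(i) == '{' && pattern.charAt(i + 1) == '}'
                    && argIndex < args.length) {
                sb.append(args[argIndex++]); // each {} is replaced by the next argument
                i += 2;
            } else {
                sb.append(pattern.charAt(i++)); // everything else is copied verbatim
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(format("truncating table `{}`", "users"));
    }
}
```

Because the substitution happens inside the message type, a logging implementation can skip it entirely when the event is filtered out, which is why parameterized logging is cheaper than concatenating strings up front.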
Log messages are often used interchangeably with log events . While this simplification holds for several cases, it is not technically correct. A log event, capturing the logging context (level, logger name, instant, etc.) along with the log message, is generated by the logging implementation (e.g., Log4j Core) when a user issues a log using a logger , e.g., LOGGER.info("Hello, world!") . Hence, log events are compound objects containing log messages . Click for an introduction to log event fields Log events contain fields that can be classified into three categories: Some fields are provided explicitly, in a Logger method call. The most important are the log level and the log message, which is a human-readable description of what happened. Some fields are contextual (e.g., Thread Context ) and are either provided explicitly by developers of other parts of the application, or are injected by Java instrumentation. The last category of fields is those that are computed automatically by the logging implementation employed.
For clarity’s sake let us look at a log event formatted as JSON: { (1) "log.level": "INFO", "message": "Unable to insert data into my_table.", "error.type": "java.lang.RuntimeException", "error.message": null, "error.stack_trace": [ { "class": "com.example.Main", "method": "doQuery", "file.name": "Main.java", "file.line": 36 }, { "class": "com.example.Main", "method": "main", "file.name": "Main.java", "file.line": 25 } ], "marker": "SQL", "log.logger": "com.example.Main", (2) "tags": [ "SQL query" ], "labels": { "span_id": "3df85580-f001-4fb2-9e6e-3066ed6ddbb1", "trace_id": "1b1f8fc9-1a0c-47b0-a06f-af3c1dd1edf9" }, (3) "@timestamp": "2024-05-23T09:32:24.163Z", "log.origin.class": "com.example.Main", "log.origin.method": "doQuery", "log.origin.file.name": "Main.java", "log.origin.file.line": 36, "process.thread.id": 1, "process.thread.name": "main", "process.thread.priority": 5 } 1 Explicitly supplied fields: log.level The level of the event, either explicitly provided as an argument to the logger call, or implied by the name of the logger method message The log message that describes what happened error.* An optional Throwable explicitly passed as an argument to the logger call marker An optional marker explicitly passed as an argument to the logger call log.logger The logger name provided explicitly to LogManager.getLogger() or inferred by Log4j API 2 Contextual fields: tags The Thread Context stack labels The Thread Context map 3 Logging backend specific fields. In case you are using Log4j Core, the following fields can be automatically generated: @timestamp The instant of the logger call log.origin.* The location of the logger call in the source code process.thread.* The name of the Java thread, where the logger is called Best practices There are several widespread bad practices while using Log4j API. Let’s try to walk through the most common ones and see how to fix them. Don’t use toString() Don’t use Object#toString() in arguments, it is redundant! /* BAD! 
*/ LOGGER.info("userId: {}", userId.toString()); Underlying message type and layout will deal with arguments: /* GOOD */ LOGGER.info("userId: {}", userId); Pass exception as the last extra argument Don’t call Throwable#printStackTrace() ! This not only circumvents the logging but can also leak sensitive information! /* BAD! */ exception.printStackTrace(); Don’t use Throwable#getMessage() ! This prevents the log event from getting enriched with the exception. /* BAD! */ LOGGER.info("failed", exception.getMessage()); /* BAD! */ LOGGER.info("failed for user ID `{}`: {}", userId, exception.getMessage()); Don’t provide both Throwable#getMessage() and Throwable itself! This bloats the log message with a duplicate exception message. /* BAD! */ LOGGER.info("failed for user ID `{}`: {}", userId, exception.getMessage(), exception); Pass exception as the last extra argument: /* GOOD */ LOGGER.error("failed", exception); /* GOOD */ LOGGER.error("failed for user ID `{}`", userId, exception); Don’t use string concatenation If you are using String concatenation while logging, you are doing something very wrong and dangerous! Don’t use String concatenation to format arguments! This circumvents the handling of arguments by message type and layout. More importantly, this approach is prone to attacks! Imagine userId being provided by the user with the following content: placeholders for non-existing args to trigger failure: {} {} {dangerousLookup} /* BAD! */ LOGGER.info("failed for user ID: " + userId); Use message parameters /* GOOD */ LOGGER.info("failed for user ID `{}`", userId); Use Supplier s to pass computationally expensive arguments If one or more arguments of the log statement are computationally expensive, it is not wise to evaluate them knowing that their results can be discarded. Consider the following example: /* BAD! 
*/ LOGGER.info("failed for user ID `{}` and role `{}`", userId, db.findUserRoleById(userId)); The database query (i.e., db.findUserRoleById(userId) ) can be a significant bottleneck if the created log event will be discarded anyway – maybe the INFO level is not accepted for this logger, or due to some other filtering. The old-school way of solving this problem is to level-guard the log statement: /* OKAY */ if (LOGGER.isInfoEnabled()) { LOGGER.info(...); } While this would work for cases where the message can be dropped due to insufficient level, this approach is still prone to other filtering cases; e.g., maybe the associated marker is not accepted. Use Supplier s to pass arguments containing computationally expensive items: /* GOOD */ LOGGER.info("failed for user ID `{}` and role `{}`", () -> userId, () -> db.findUserRoleById(userId)); Use a Supplier to pass the message and its arguments containing computationally expensive items: /* GOOD */ LOGGER.info(() -> new ParameterizedMessage("failed for user ID `{}` and role `{}`", userId, db.findUserRoleById(userId))); Loggers Logger s are the primary entry point for logging. In this section we will introduce you to further details about Logger s. Refer to Architecture to see where Logger s stand in the big picture. Logger names Most logging implementations use a hierarchical scheme for matching logger names with logging configuration. In this scheme, the logger name hierarchy is represented by . (dot) characters in the logger name, in a fashion very similar to the hierarchy used for Java package names. For example, org.apache.logging.appender and org.apache.logging.filter both have org.apache.logging as their parent. In most cases, applications name their loggers by passing the current class's name to LogManager.getLogger(…​) . Because this usage is so common, Log4j provides that as the default when the logger name parameter is either omitted or is null.
For example, all Logger -typed variables below will have a name of com.example.LoggerNameTest : public class LoggerNameTest { Logger logger1 = LogManager.getLogger(LoggerNameTest.class); Logger logger2 = LogManager.getLogger(LoggerNameTest.class.getName()); Logger logger3 = LogManager.getLogger(); } We suggest using LogManager.getLogger() without any arguments, since it delivers the same functionality with fewer characters and is not prone to copy-paste errors. Logger message factories Loggers translate LOGGER.info("Hello, {}!", name); calls to the appropriate canonical logging method: LOGGER.log(Level.INFO, messageFactory.createMessage("Hello, {}!", new Object[] {name})); Note that how Hello, {}! is encoded, given the {name} array as argument, depends entirely on the MessageFactory employed. Log4j allows users to customize this behaviour in several getLogger() methods of LogManager : LogManager.getLogger() (1) .info("Hello, {}!", name); (2) LogManager.getLogger(StringFormatterMessageFactory.INSTANCE) (3) .info("Hello, %s!", name); (4) 1 Create a logger using the default message factory 2 Use default parameter placeholders, that is, {} style 3 Explicitly provide the message factory, that is, StringFormatterMessageFactory . Note that there are several other getLogger() methods accepting a MessageFactory . 4 Note the placeholder change from {} to %s ! Passed Hello, %s! and name arguments will be implicitly translated to a String.format("Hello, %s!", name) call due to the employed StringFormatterMessageFactory . Log4j bundles several predefined message factories . Some common ones are accessible through convenient factory methods, which we will cover below. Formatter logger The Logger instance returned by default replaces the occurrences of {} placeholders with the toString() output of the associated parameter.
If you need more control over how the parameters are formatted, you can also use the java.util.Formatter format strings by obtaining your Logger using LogManager#getFormatterLogger() : Logger logger = LogManager.getFormatterLogger(); logger.debug("Logging in user %s with birthday %s", user.getName(), user.getBirthdayCalendar()); logger.debug( "Logging in user %1$s with birthday %2$tm %2$te,%2$tY", user.getName(), user.getBirthdayCalendar()); logger.debug("Integer.MAX_VALUE = %,d", Integer.MAX_VALUE); logger.debug("Long.MAX_VALUE = %,d", Long.MAX_VALUE); Loggers returned by getFormatterLogger() are referred to as formatter loggers . printf() method Formatter loggers give fine-grained control over the output format, but have the drawback that the correct type must be specified. For example, passing anything other than a decimal integer for a %d format parameter throws an exception. If your main usage is to use {} -style parameters, but occasionally you need fine-grained control over the output format, you can use the Logger#printf() method: Logger logger = LogManager.getLogger("Foo"); logger.debug("Opening connection to {}...", someDataSource); logger.printf(Level.INFO, "Hello, %s!", userName); Formatter performance Keep in mind that, contrary to the formatter logger, the default Log4j logger (i.e., {} -style parameters) is heavily optimized for several use cases and can operate garbage-free when configured correctly. You might reconsider your formatter logger usages for latency sensitive applications. Event logger EventLogger is a convenience to log StructuredDataMessage s, which format their content in a way compliant with the Syslog message format described in RFC 5424 . Event Logger is deprecated for removal! We advise users to switch to plain Logger instead. Read more on event loggers…​ Simple logger Even though Log4j Core is the reference implementation of Log4j API, Log4j API itself also provides a very minimalist implementation: Simple Logger .
This is a convenience for environments where either a fully fledged logging implementation is missing, or one cannot be included for other reasons. SimpleLogger is the fallback Log4j API implementation if no other is available on the classpath.

Read more on the simple logger…

Status logger

Status Logger is a standalone, self-sufficient Logger implementation used to record events that occur in the logging system (i.e., Log4j) itself. It is the logging system Log4j uses to report the status of its own internals. Users can use the status logger to either emit logs from their custom Log4j components or troubleshoot a Log4j configuration.

Read more on the status logger…

Fluent API

The fluent API allows you to log using a fluent interface:

LOGGER.atInfo()
        .withMarker(marker)
        .withLocation()
        .withThrowable(exception)
        .log("Login for user `{}` failed", userId);

Read more on the Fluent API…

Fish tagging

Just as a fish can be tagged and have its movement tracked (a.k.a. fish tagging [1]), stamping log events with a common tag or set of data elements allows the complete flow of a transaction or a request to be tracked. You can use such tags for several purposes, such as:

- Providing extra information while serializing the log event
- Allowing information to be filtered so that it does not overwhelm the system or the individuals who need to make use of it

Log4j provides fish tagging in several flavors:

Levels

Log levels are used to categorize log events by severity. Log4j contains predefined levels, of which the most common are DEBUG, INFO, WARN, and ERROR. Log4j also allows you to introduce your own custom levels.
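As a rough sketch of how severity-based filtering works, the following is an illustrative plain-Java model, not Log4j's implementation (in Log4j itself, a custom level is created with Level.forName(name, intLevel), and lower intLevel values are more severe):

```java
// Illustrative model of severity-based level filtering; not Log4j's code.
// Log4j's standard intLevel values run ERROR=200, WARN=300, INFO=400, DEBUG=500,
// so a lower number means a more severe event.
public class LevelDemo {

    enum Level {
        ERROR(200), WARN(300), INFO(400), DEBUG(500);
        final int intLevel;
        Level(int intLevel) { this.intLevel = intLevel; }
    }

    // An event is logged when it is at least as severe as the configured threshold.
    static boolean isEnabled(Level threshold, Level event) {
        return event.intLevel <= threshold.intLevel;
    }

    public static void main(String[] args) {
        System.out.println(isEnabled(Level.INFO, Level.WARN));  // WARN is more severe than INFO: logged
        System.out.println(isEnabled(Level.INFO, Level.DEBUG)); // DEBUG is less severe: dropped
    }
}
```

A custom level such as NOTICE with an intLevel of 350 would, under this model, pass a WARN-threshold filter but not an ERROR-threshold one.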
Read more on custom levels…

Markers

Markers are programmatic labels that developers can associate with log statements:

public class MyApp {

    private static final Logger LOGGER = LogManager.getLogger();

    private static final Marker ACCOUNT_MARKER = MarkerManager.getMarker("ACCOUNT");

    public void removeUser(String userId) {
        LOGGER.debug(ACCOUNT_MARKER, "Removing user with ID `{}`", userId);
        // ...
    }
}

Read more on markers…

Thread Context

Just like Java's ThreadLocal, the Thread Context facilitates associating information with the executing thread and making this information accessible to the rest of the logging system. The Thread Context offers both map-structured storage – referred to as the Thread Context Map or Mapped Diagnostic Context (MDC) – and stack-structured storage – referred to as the Thread Context Stack or Nested Diagnostic Context (NDC):

ThreadContext.put("ipAddress", request.getRemoteAddr());        (1)
ThreadContext.put("hostName", request.getServerName());         (1)
ThreadContext.put("loginId", session.getAttribute("loginId"));  (1)

void performWork() {
    ThreadContext.push("performWork()");    (2)
    LOGGER.debug("Performing work");        (3)
    // Perform the work
    ThreadContext.pop();                    (4)
}

ThreadContext.clear();                      (5)

1. Adds properties to the thread context map.
2. Pushes a property onto the thread context stack.
3. Added properties can later be used to, for instance, filter the log event, provide extra information in the layout, etc.
4. Pops the last pushed property from the thread context stack.
5. Clears the thread context (both stack and map!).

Read more on Thread Context…

Messages

Whereas almost every other logging API and implementation accepts only String-typed input as the message, Log4j generalizes this concept with a Message contract. The customizability of the message type enables users to have complete control over how a message is encoded by Log4j.
This liberal approach allows applications to choose the message type best fitting their logging needs; they can log plain Strings, or custom PurchaseOrder objects. Log4j provides several predefined message types to cater for common use cases:

Simple String-typed messages:

LOGGER.info("foo");
LOGGER.info(new SimpleMessage("foo"));

String-typed parameterized messages:

LOGGER.info("foo {} {}", "bar", "baz");
LOGGER.info(new ParameterizedMessage("foo {} {}", new Object[] {"bar", "baz"}));

Map-typed messages:

LOGGER.info(new StringMapMessage().with("key1", "val1").with("key2", "val2"));

Read more on messages…

Flow tracing

The Logger class provides the traceEntry(), traceExit(), catching(), and throwing() methods, which are quite useful for following the execution path of applications. These methods generate log events that can be filtered separately from other debug logging.

Read more on flow tracing…

1. Fish tagging was first described by Neil Harrison in the "Patterns for Logging Diagnostic Messages" chapter of "Pattern Languages of Program Design 3", edited by R. Martin, D. Riehle, and F. Buschmann in 1997.

Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Please read our privacy policy. Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
https://docs.aws.amazon.com/pt_br/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-testing-debugging.html | Test and debug Lambda@Edge functions - Amazon CloudFront

Test and debug Lambda@Edge functions

It is important to test your Lambda@Edge function code independently, to make sure that it completes the intended task, and to perform integration testing, to make sure that the function works correctly with CloudFront. During integration testing, or after your function has been deployed, you might need to debug CloudFront errors, such as HTTP 5xx errors. Errors can be an invalid response returned from the Lambda function, execution errors when the function is triggered, or errors caused by execution throttling by the Lambda service. The sections in this topic share strategies for determining which type of failure is the problem, and then the steps you can take to fix it.

Note: When you review CloudWatch log files or metrics while troubleshooting errors, be aware that they are displayed or stored in the AWS Region closest to the location where the function ran. So if you have a website or web application with users in the United Kingdom, for example, and you have a Lambda function associated with your distribution, you must switch to the London AWS Region to view the CloudWatch metrics or log files. For more information, see Determine the Lambda@Edge Region.
Test your Lambda@Edge functions

There are two steps to testing your Lambda function: standalone testing and integration testing.

Test standalone functionality

Before you add your Lambda function to CloudFront, test its functionality first by using the testing capabilities in the Lambda console or by using other methods. For more information about testing in the Lambda console, see Invoke the Lambda function using the console in the AWS Lambda Developer Guide.

Test your function's operation in CloudFront

It is important to complete integration testing, where your function is associated with a distribution and runs based on a CloudFront event. Make sure that the function is triggered for the right event, and returns a valid and correct response to CloudFront. For example, verify that the event structure is correct, that only valid headers are included, and so on. As you iterate on integration testing with your function in the Lambda console, refer to the steps in the Lambda@Edge tutorial as you modify your code or change the CloudFront trigger that invokes your function. For example, make sure that you are working in a numbered version of your function, as described in this tutorial step: Step 4: Add a CloudFront trigger to run the function. As you make changes and deploy them, be aware that your updated function and CloudFront triggers take several minutes to replicate across all Regions. This typically takes a few minutes, but can take up to 15 minutes.
You can check whether replication has finished by going to the CloudFront console and viewing your distribution.

To check whether replication has finished deploying

Open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home. Choose the name of the distribution. Check for the distribution status to change from In Progress back to Deployed, which means that your function has been replicated. Then follow the steps in the next section to verify that the function works.

Be aware that testing in the console only validates your function's logic, and does not apply any service quotas (formerly known as limits) that are specific to Lambda@Edge.

Identify Lambda@Edge function errors in CloudFront

After you have verified that your function's logic works correctly, you might still see HTTP 5xx errors when your function runs in CloudFront. HTTP 5xx errors can be returned for a variety of reasons, which can include Lambda function errors or other issues in CloudFront. If you are using Lambda@Edge functions, you can use graphs in the CloudFront console to help track down what is causing the error, and then work to fix it. For example, you can see whether HTTP 5xx errors are caused by CloudFront or by Lambda functions, and then, for specific functions, view the related log files to investigate the issue. To troubleshoot most CloudFront HTTP errors, see the troubleshooting steps in the following topic: Troubleshoot error response status codes in CloudFront.

What causes Lambda@Edge function errors in CloudFront

There are several reasons why a Lambda function can cause an HTTP 5xx error, and the troubleshooting steps you should take depend on the type of error.
Errors can be categorized as follows:

A Lambda function execution error. An execution error occurs when CloudFront does not get a response from Lambda because there are unhandled exceptions in the function or there is an error in the code. For example, if the code includes a callback(error).

An invalid Lambda function response is returned to CloudFront. After the function runs, CloudFront receives a response from Lambda. An error is returned if the structure of the response object does not conform to the Lambda@Edge event structure, or if the response contains invalid headers or other invalid fields.

Execution in CloudFront is throttled because of Lambda service quotas (formerly known as limits). The Lambda service throttles executions in each Region, and returns an error if you exceed the quota. For more information, see Quotas on Lambda@Edge.

How to determine the type of failure

To help you decide where to focus while debugging and working to resolve errors returned by CloudFront, it is helpful to identify why CloudFront is returning an HTTP error. To get started, you can use the graphs provided in the Monitoring section of the CloudFront console in the AWS Management Console. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitor CloudFront metrics with Amazon CloudWatch.

The following graphs can be especially helpful when you want to track down whether errors are being returned by origins or by a Lambda function, and to narrow down the type of issue when it is a Lambda function error.

Error rates graph

One of the graphs that you can view on the Overview tab for each of your distributions is an Error rates graph. This graph displays the error rate as a percentage of the total requests coming to your distribution.
The graph shows the total error rate, total 4xx errors, total 5xx errors, and total 5xx errors from Lambda functions. Based on the error type and volume, you can take steps to investigate and troubleshoot the cause. If you see Lambda errors, you can investigate further by looking at the specific types of errors returned by the function. The Lambda@Edge errors tab includes graphs that categorize errors by type for each function, to help you pinpoint the issue with a specific function. If you see CloudFront errors, you can troubleshoot and work to fix origin errors or change your CloudFront configuration. For more information, see Troubleshoot error response status codes in CloudFront.

Execution errors and invalid function responses graphs

The Lambda@Edge errors tab includes graphs that categorize the Lambda@Edge errors for a specific distribution, by type. For example, one graph shows all execution errors by AWS Region. To make troubleshooting easier, you can look for specific issues by opening and examining the log files for specific functions by Region.

To view the log files for a specific function by Region

On the Lambda@Edge errors tab, under Associated Lambda@Edge functions, choose the function name, and then choose View metrics. Next, on the page with your function's name, in the upper-right corner, choose View function logs, and then choose a Region. For example, if you see issues in the Errors graph for the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console. In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events for the function.
In addition, read the following sections in this chapter for more recommendations about troubleshooting and fixing errors.

Throttles graph

The Lambda@Edge errors tab also includes a Throttles graph. Occasionally, the Lambda service throttles your function invocations on a per-Region basis when the regional concurrency quota (formerly known as limit) is reached. If you see a limit exceeded error, your function has reached a quota that the Lambda service imposes on executions in a Region. For more information, including how to request a quota increase, see Quotas on Lambda@Edge.

For an example of how to use this information to troubleshoot HTTP errors, see Four steps for debugging your content delivery on AWS.

Troubleshoot invalid Lambda@Edge function responses (validation errors)

If you identify that your problem is a Lambda validation error, your Lambda function might be returning an invalid response to CloudFront. Follow the guidance in this section to take steps to review your function and make sure that your response conforms to CloudFront's requirements.

CloudFront validates the response from a Lambda function in two ways:

The Lambda response must conform to the required object structure. Examples of invalid object structures include the following: unparseable JSON, missing required fields, and an invalid object in the response. For more information, see Lambda@Edge event structure.

The response must include only valid object values. An error occurs if the response includes a valid object but has unsupported values.
Examples include the following: adding or updating disallowed or read-only headers (see Restrictions on edge functions), exceeding the maximum body size (see Restrictions on the size of the generated response in the Lambda@Edge errors topic), and invalid characters or values (see Lambda@Edge event structure).

When Lambda returns an invalid response to CloudFront, error messages are written to log files that CloudFront pushes to CloudWatch in the Region where the Lambda function ran. The default behavior is to send the log files to CloudWatch when there is an invalid response. However, if you associated a Lambda function with CloudFront before this functionality was released, it might not be enabled for your function. For more information, see Determine whether your account pushes logs to CloudWatch, later in this topic.

CloudFront pushes log files to the Region corresponding to where your function ran, in the log group associated with your distribution. Log groups have the following format: /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where you can find the CloudWatch log files, see Determine the Lambda@Edge Region, later in this topic.

If the error is reproducible, you can create a new request that results in the error, and then find the request ID in a failed CloudFront response (the X-Amz-Cf-Id header) to locate a single failure in the log files. The log file entry includes information that can help you identify why the error is being returned, and also lists the corresponding Lambda request ID, so that you can analyze the root cause in the context of a single request.
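The log-group naming convention and request-ID lookup described above are easy to express in code. The sketch below is a hypothetical helper for illustration only: the distribution ID EDFDVBD6EXAMPLE and the request ID are placeholders, not real AWS identifiers, and the matching logic simply searches log lines for the X-Amz-Cf-Id value.

```java
// Hypothetical helper illustrating the documented CloudWatch naming convention
// /aws/cloudfront/LambdaEdge/<DistributionId>. The IDs used here are
// placeholders for illustration, not real AWS identifiers.
public class EdgeLogLocator {

    // Builds the log group name CloudFront uses for a distribution's
    // invalid Lambda@Edge response logs.
    static String logGroupName(String distributionId) {
        return "/aws/cloudfront/LambdaEdge/" + distributionId;
    }

    // True when a CloudWatch log line mentions the request ID taken from the
    // failed response's X-Amz-Cf-Id header.
    static boolean mentionsRequestId(String logLine, String xAmzCfId) {
        return logLine.contains(xAmzCfId);
    }

    public static void main(String[] args) {
        System.out.println(logGroupName("EDFDVBD6EXAMPLE"));
        // -> /aws/cloudfront/LambdaEdge/EDFDVBD6EXAMPLE
        System.out.println(mentionsRequestId(
                "ERROR Validation failed for request abc123EXAMPLE", "abc123EXAMPLE"));
    }
}
```

Filtering the log stream for the X-Amz-Cf-Id value in this way narrows a reproduction down to the single failed request, as the troubleshooting steps above describe.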
If an error is intermittent, you can use the CloudFront access logs to find the request ID of a request that failed, and then search the CloudWatch logs for the corresponding error messages. For more information, see the previous section, How to determine the type of failure.

Troubleshoot Lambda@Edge function execution errors

If the problem is a Lambda execution error, it can be helpful to add logging statements to your Lambda functions, to write messages to the CloudWatch log files that monitor the execution of your function in CloudFront and determine whether it is working as expected. Then you can search for those statements in the CloudWatch log files to verify that your function is working.

Note: Even if you have not changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could result in an execution error. For information about testing and migrating to a newer version, see Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environment.

Determine the Lambda@Edge Region

To see the Regions where your Lambda@Edge function is receiving traffic, view graphs of the function's metrics in the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region. On the same page, you can choose a Region and view the log files for that Region so that you can investigate issues. Review the CloudWatch log files in the correct AWS Region to see the log files created when CloudFront ran your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitor CloudFront metrics with Amazon CloudWatch.
Determine whether your account pushes logs to CloudWatch

By default, CloudFront enables logging of invalid Lambda function responses, and pushes the log files to CloudWatch by using one of the service-linked roles for Lambda@Edge. If you have Lambda@Edge functions that were added to CloudFront before the invalid Lambda function response logging feature was released, logging is enabled when you update your Lambda@Edge configuration, for example, by adding a CloudFront trigger.

You can verify whether pushing log files to CloudWatch is enabled for your account by doing the following:

Check whether the logs appear in CloudWatch: Make sure that you look in the Region where the Lambda@Edge function ran. For more information, see Determine the Lambda@Edge Region.

Determine whether the related service-linked role exists in your account in IAM: You must have the AWSServiceRoleForCloudFrontLogger IAM role in your account. For more information about this role, see Service-linked roles for Lambda@Edge.
https://docs.aws.amazon.com/id_id/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-testing-debugging.html#lambda-edge-testing-debugging-troubleshooting-invalid-responses | Uji dan debug fungsi Lambda @Edge - Amazon CloudFront Uji dan debug fungsi Lambda @Edge - Amazon CloudFront Dokumentasi Amazon CloudFront Panduan Developerr Uji fungsi Lambda @Edge Anda Identifikasi kesalahan fungsi Lambda @Edge di CloudFront Memecahkan masalah respons fungsi Lambda @Edge yang tidak valid (kesalahan validasi) Memecahkan masalah kesalahan eksekusi fungsi Lambda @Edge Tentukan Wilayah Lambda @Edge Tentukan apakah akun Anda mendorong log ke CloudWatch Terjemahan disediakan oleh mesin penerjemah. Jika konten terjemahan yang diberikan bertentangan dengan versi bahasa Inggris aslinya, utamakan versi bahasa Inggris. Uji dan debug fungsi Lambda @Edge Penting untuk menguji kode fungsi Lambda @Edge Anda secara mandiri, untuk memastikan bahwa itu menyelesaikan tugas yang dimaksudkan, dan untuk melakukan pengujian integrasi, untuk memastikan bahwa fungsi berfungsi dengan benar. CloudFront Selama pengujian integrasi atau setelah fungsi Anda di-deploy, Anda mungkin perlu men-debug CloudFront kesalahan, seperti kesalahan HTTP 5xx. Kesalahan dapat menjadi respons tidak valid yang dikembalikan dari fungsi Lambda, kesalahan eksekusi saat fungsi dipicu, atau kesalahan akibat perotasian eksekusi oleh layanan Lambda. Bagian-bagian dalam topik ini membagikan strategi untuk menentukan jenis kegagalan mana yang menjadi masalahnya, kemudian langkah-langkah yang dapat Anda ambil untuk memperbaiki masalah. catatan Saat Anda meninjau file CloudWatch log atau metrik saat Anda memecahkan masalah kesalahan, ketahuilah bahwa kesalahan tersebut ditampilkan atau disimpan di lokasi Wilayah AWS terdekat dengan lokasi di mana fungsi dijalankan. 
Jadi, jika Anda memiliki situs web atau aplikasi web dengan pengguna di Britania Raya, dan Anda memiliki fungsi Lambda yang terkait dengan distribusi Anda, misalnya, Anda harus mengubah Wilayah untuk CloudWatch melihat metrik atau file log untuk London. Wilayah AWS Untuk informasi selengkapnya, lihat Tentukan Wilayah Lambda @Edge . Topik Uji fungsi Lambda @Edge Anda Identifikasi kesalahan fungsi Lambda @Edge di CloudFront Memecahkan masalah respons fungsi Lambda @Edge yang tidak valid (kesalahan validasi) Memecahkan masalah kesalahan eksekusi fungsi Lambda @Edge Tentukan Wilayah Lambda @Edge Tentukan apakah akun Anda mendorong log ke CloudWatch Uji fungsi Lambda @Edge Anda Terdapat dua langkah untuk menguji fungsi Lambda Anda: pengujian mandiri dan pengujian integrasi. Uji fungsionalitas mandiri Sebelum Anda menambahkan fungsi Lambda CloudFront, pastikan untuk menguji fungsionalitas terlebih dahulu dengan menggunakan kemampuan pengujian di konsol Lambda atau dengan menggunakan metode lain. Untuk informasi selengkapnya tentang pengujian di konsol Lambda, lihat Memanggil fungsi Lambda menggunakan konsol di Panduan Pengembang .AWS Lambda Uji operasi fungsi Anda di CloudFront Penting untuk menyelesaikan pengujian integrasi, di mana fungsi Anda dikaitkan dengan distribusi dan berjalan berdasarkan CloudFront peristiwa. Pastikan bahwa fungsi dipicu untuk acara yang tepat, dan mengembalikan respons yang valid dan benar untuk CloudFront. Misalnya, pastikan bahwa struktur acara sudah benar, bahwa hanya header yang valid yang disertakan, dan sebagainya. Saat Anda mengulangi pengujian integrasi dengan fungsi Anda di konsol Lambda, lihat langkah-langkah dalam tutorial Lambda @Edge saat Anda memodifikasi kode atau mengubah CloudFront pemicu yang memanggil fungsi Anda. Misalnya, pastikan bahwa Anda bekerja dalam versi bernomor dari fungsi Anda, seperti yang dijelaskan dalam langkah tutorial ini: Langkah 4: Tambahkan CloudFront pemicu untuk menjalankan fungsi . 
Saat Anda membuat perubahan dan menerapkannya, ketahuilah bahwa fungsi dan CloudFront pemicu Anda yang diperbarui akan memakan waktu beberapa menit untuk mereplikasi di semua Wilayah. Ini biasanya memerlukan waktu beberapa menit, tetapi dapat memakan waktu hingga 15 menit. Anda dapat memeriksa untuk melihat apakah replikasi selesai dengan membuka CloudFront konsol dan melihat distribusi Anda. Untuk memeriksa apakah replikasi Anda telah selesai digunakan Buka CloudFront konsol di https://console.aws.amazon.com/cloudfront/v4/home . Pilih nama distribusi. Periksa status distribusi yang akan diubah dari Sedang Berlangsung kembali ke Diterapkan , yang berarti fungsi Anda telah direplikasi. Kemudian ikuti langkah-langkah di bagian berikutnya untuk memverifikasi bahwa fungsi berfungsi. Ketahuilah bahwa pengujian di konsol hanya memvalidasi logika fungsi Anda, dan tidak menerapkan kuota layanan apa pun (sebelumnya dikenal sebagai batas) yang khusus untuk Lambda @Edge. Identifikasi kesalahan fungsi Lambda @Edge di CloudFront Setelah Anda memverifikasi bahwa logika fungsi Anda berfungsi dengan benar, Anda mungkin masih melihat kesalahan HTTP 5xx saat fungsi Anda berjalan. CloudFront Kesalahan HTTP 5xx dapat dikembalikan karena berbagai alasan, yang dapat mencakup kesalahan fungsi Lambda atau masalah lain di dalamnya. CloudFront Jika Anda menggunakan fungsi Lambda @Edge, Anda dapat menggunakan grafik di CloudFront konsol untuk membantu melacak penyebab kesalahan, dan kemudian bekerja untuk memperbaikinya. Misalnya, Anda dapat melihat apakah kesalahan HTTP 5xx disebabkan oleh CloudFront atau oleh fungsi Lambda, dan kemudian, untuk fungsi tertentu, Anda dapat melihat file log terkait untuk menyelidiki masalah tersebut. Untuk memecahkan masalah kesalahan HTTP secara umum di CloudFront, lihat langkah-langkah pemecahan masalah dalam topik berikut:. 
Memecahkan masalah kode status respons kesalahan di CloudFront Apa yang menyebabkan kesalahan fungsi Lambda @Edge di CloudFront Ada beberapa alasan mengapa fungsi Lambda dapat menyebabkan kesalahan HTTP 5xx, dan langkah-langkah pemecahan masalah yang harus Anda ambil bergantung pada jenis kesalahan. Kesalahan dapat dikategorikan sebagai berikut: Kesalahan eksekusi fungsi Lambda Kesalahan eksekusi terjadi ketika CloudFront tidak mendapatkan respons dari Lambda karena ada pengecualian yang tidak tertangani dalam fungsi atau ada kesalahan dalam kode. Misalnya, jika kode menyertakan callback(Kesalahan). Respons fungsi Lambda yang tidak valid dikembalikan ke CloudFront Setelah fungsi berjalan, CloudFront menerima respons dari Lambda. Kesalahan dikembalikan jika struktur objek tanggapan tidak sesuai dengan Struktur acara Lambda @Edge , atau respons berisi header yang tidak valid atau kolom tidak valid lainnya. Eksekusi di CloudFront dibatasi karena kuota layanan Lambda (sebelumnya dikenal sebagai batas) Eksekusi throttle layanan Lambda di setiap Wilayah, dan menghasilkan kesalahan jika Anda melebihi kuota. Untuk informasi selengkapnya, lihat Kuotas di Lambda@Edge . Cara menentukan jenis kegagalan Untuk membantu Anda memutuskan di mana harus fokus saat Anda men-debug dan bekerja untuk menyelesaikan kesalahan yang dikembalikan oleh CloudFront, akan sangat membantu untuk mengidentifikasi CloudFront mengapa mengembalikan kesalahan HTTP. Untuk memulai, Anda dapat menggunakan grafik yang disediakan di bagian Pemantauan CloudFront konsol di Konsol Manajemen AWS. Untuk informasi selengkapnya tentang melihat grafik di bagian Pemantauan CloudFront konsol, lihat Pantau CloudFront metrik dengan Amazon CloudWatch . Grafik berikut akan sangat membantu ketika Anda ingin melacak apakah kesalahan dikembalikan oleh asal atau fungsi Lambda, dan untuk mempersempit jenis masalah ketika itu adalah kesalahan dari fungsi Lambda. 
Error rate graphs

One of the graphs that you can view on the Overview tab for each of your distributions is the Error rate graph. This graph displays error rates as a percentage of the total requests coming to your distribution: the total error rate, total 4xx errors, total 5xx errors, and total 5xx errors returned by Lambda functions. Based on the type and volume of errors, you can take steps to investigate and troubleshoot the cause. If you see Lambda errors, you can investigate further by looking at the specific types of error that the function returns. The Lambda@Edge errors tab includes graphs that categorize function errors by type, to help you pinpoint the issue with a specific function. If you see CloudFront errors, you can troubleshoot and work to fix origin errors or change your CloudFront configuration. For more information, see Troubleshooting error response status codes in CloudFront.

Invalid function response and execution error graphs

The Lambda@Edge errors tab includes graphs that categorize the Lambda@Edge errors for a specific distribution, by type. For example, one graph shows all execution errors by AWS Region. To make troubleshooting easier, you can look for specific issues by opening and examining the log files for a specific function, by Region.

To view the log files for a specific function by Region:

1. On the Lambda@Edge errors tab, under Associated Lambda@Edge functions, choose the function name, then choose View metrics.
2. On the page with your function's name, in the upper-right corner, choose View function logs, then choose a Region. For example, if you see issues in the Errors graph for the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console.
3. In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events for the function.

In addition, read the following sections in this chapter for more recommendations on troubleshooting and fixing errors.

Throttles graph

The Lambda@Edge errors tab also includes a Throttles graph. Sometimes, the Lambda service throttles your function invocations on a per-Region basis when you reach the Regional concurrency quota (formerly known as a limit). If you see throttling errors, your function has reached the quota that the Lambda service imposes on executions in a Region. For more information, including how to request a quota increase, see Quotas on Lambda@Edge. For an example of how to use this information to help troubleshoot HTTP errors, see Four Steps for Debugging your Content Delivery on AWS.

Troubleshooting invalid Lambda@Edge function responses (validation errors)

If you identify that your problem is a Lambda validation error, it means that your Lambda function is returning an invalid response to CloudFront. Follow the guidance in this section to review your function and make sure that your response conforms to CloudFront's requirements.

CloudFront validates the response from a Lambda function in two ways:

- The Lambda response must conform to the required object structure. Examples of bad object structure include unparseable JSON, missing required fields, and an invalid object in the response. For more information, see Lambda@Edge event structure.
- The response must include only valid object values. An error occurs if the response includes a valid object but has values that aren't supported. Examples include adding or updating headers that are on the disallowed or read-only lists (see Restrictions on edge functions), exceeding the maximum body size (see Restrictions on the size of the generated response in the Lambda@Edge errors topic), and invalid characters or values (see Lambda@Edge event structure).

When Lambda returns an invalid response to CloudFront, error messages are written to log files, which CloudFront pushes to CloudWatch in the Region where the Lambda function executed. Sending the log files to CloudWatch when there's an invalid response is the default behavior. However, if you associated a Lambda function with CloudFront before this functionality was released, it might not be enabled for your function. For more information, see Determine whether your account pushes logs to CloudWatch later in this topic.

CloudFront pushes the log files to the Region corresponding to where your function executed, in the log group that's associated with your distribution. Log groups have the following name format: /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where you can find the CloudWatch log files, see Determining the Lambda@Edge Region later in this topic.

If the error is reproducible, you can create a new request that results in the error, and then find the request id in the failed CloudFront response (the X-Amz-Cf-Id header) to locate a single failure in the log files. The log file entry includes information that can help you identify why the error is being returned, and also lists the corresponding Lambda request id so that you can analyze the root cause in the context of a single request. If the error is intermittent, you can use CloudFront access logs to find the request id for a failed request, and then search the CloudWatch logs for the corresponding error messages.
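To make the structural-validation rules concrete, the following is a minimal sketch of a generated-response shape that satisfies them, built as a plain map. The field names (status, statusDescription, headers, body) follow the Lambda@Edge event structure the text refers to; the class name and checks here are illustrative, not an AWS API.

```java
import java.util.List;
import java.util.Map;

public class ValidResponseSketch {
    public static void main(String[] args) {
        // Minimal generated-response shape that passes CloudFront's structural
        // validation: "status" is required, header names are lowercase map
        // keys, and each header value is a list of key/value entries.
        Map<String, Object> response = Map.of(
                "status", "200",
                "statusDescription", "OK",
                "headers", Map.of(
                        "content-type", List.of(
                                Map.of("key", "Content-Type", "value", "text/plain"))),
                "body", "Hello from Lambda@Edge");

        // The first of CloudFront's two checks, in miniature: the required
        // field must be present. (The second check, value validity, depends
        // on the disallowed/read-only header lists and size limits.)
        if (!response.containsKey("status")) {
            throw new IllegalStateException("missing required 'status' field");
        }
        System.out.println("status=" + response.get("status"));
    }
}
```

A function that returned this map serialized as JSON would satisfy the object-structure requirement; the value-validity requirement additionally depends on the restrictions linked above.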
For more information, see the preceding section, Determining the type of failure.

Troubleshooting Lambda@Edge function execution errors

If the problem is a Lambda execution error, it can be helpful to add logging statements to the Lambda function, to write messages to the CloudWatch log files that monitor the execution of your function with CloudFront and to determine whether it's working as expected. Then you can search for those statements in the CloudWatch log files to verify that your function is working.

Note: Even if you haven't changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could return an execution error. For information about testing and migrating to a later version, see Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environments.

Determining the Lambda@Edge Region

To see the Regions where your Lambda@Edge function is receiving traffic, view the metrics for the function in the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region. On the same page, you can choose a Region and view the log files for that Region so that you can investigate issues. You must review the CloudWatch log files in the correct AWS Region to see the log files that were created when CloudFront executed your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitor CloudFront metrics with Amazon CloudWatch.

Determine whether your account pushes logs to CloudWatch

By default, CloudFront enables logging of invalid Lambda function responses and pushes the log files to CloudWatch by using one of the service-linked roles for Lambda@Edge. If you have a Lambda@Edge function that you added to CloudFront before the invalid-response logging feature was released, logging is enabled when you update your Lambda@Edge configuration, for example, by adding a trigger.

You can verify that pushing log files to CloudWatch is enabled for your account by doing the following:

- Check to see if logs appear in CloudWatch — Make sure that you look in the Region where the Lambda@Edge function executed. For more information, see Determining the Lambda@Edge Region.
- Determine whether the related service-linked role exists in your account in IAM — You must have the AWSServiceRoleForCloudFrontLogger IAM role in your account. For more information about this role, see Service-linked roles for Lambda@Edge.
https://logging.apache.org/log4j/2.x/manual/plugins.html

Plugins

The Log4j plugin system is the de facto extension mechanism embraced by various Log4j Core components. Plugins make it possible for extensible components to receive feature implementations without any explicit links in between. It is analogous to a dependency injection framework, but curated for Log4j-specific needs. The plugin system is implemented by Log4j Core, the logging implementation. It is deliberately not a part of the Log4j API, to keep the logging API footprint small.

Did you know about the Plugin reference, the documentation extracted from the source code of all predefined Log4j plugins? Like Javadoc, but specialized for plugins!
In this section we give an overview of the Log4j plugin system by answering the following questions:

- How can you declare a plugin?
- How can you declare a plugin that needs to be represented in a Log4j configuration file?
- How can you register your plugin with Log4j?
- How does Log4j discover plugins?
- How can you load other plugins in a plugin?

Declaring plugins

A class can be declared as a plugin by adding a @Plugin annotation, which is essentially composed of the following attributes:

- name — The name of the plugin. It is recommended that it be distinct among plugins sharing the same category; name matching is case-insensitive.
- category (optional) — A name used for grouping a set of plugins. category matching is case-sensitive.
- elementType (deprecated) — We don't recommend the use of elementType anymore. Existing usages are kept for backward-compatibility reasons with the legacy configuration syntax: <appender type="ConsoleAppender">.

See LowerLookup.java (a lookup for lower-casing its input) for a simple example.

Name collisions and overriding an existing plugin: the name attribute of plugins of a certain category is recommended to be distinct, and this matching is case-insensitive. In case of a name collision, a warning is emitted, and the plugin discovery order determines the effective plugin. For example, to override the File plugin, which is provided by the built-in File Appender, you would need to place your plugin in a JAR file on the classpath ahead of the Log4j Core JAR. In an OSGi environment, the order in which bundles are scanned for plugins generally follows the order in which the bundles were installed into the framework; see getBundles() and SynchronousBundleListener. In short, name collisions are even more unpredictable in an OSGi environment.
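A plugin declaration can be sketched by analogy with the LowerLookup example cited above. The class below is hypothetical (an "upper" lookup that upper-cases its input; the class name and plugin name are invented), and it assumes log4j-core 2.x on the compile classpath for the @Plugin annotation and the StrLookup interface:

```java
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.lookup.StrLookup;

// Hypothetical lookup plugin, analogous to the LowerLookup example but
// upper-casing instead. StrLookup.CATEGORY is the "Lookup" category, so
// this becomes usable in configurations as ${upper:...}.
@Plugin(name = "upper", category = StrLookup.CATEGORY)
public class UpperLookup implements StrLookup {

    @Override
    public String lookup(String key) {
        return key == null ? null : key.toUpperCase();
    }

    @Override
    public String lookup(LogEvent event, String key) {
        // This lookup ignores the log event; event-aware lookups can
        // derive the value from the event instead.
        return lookup(key);
    }
}
```

Because name matching is case-insensitive, this plugin would also answer to ${UPPER:...} in a configuration file.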
Declaring plugins represented in a configuration file

If your plugin needs to be represented by an element in a configuration file (such as an appender, layout, logger, or filter), the following requirements must be met:

- The category attribute of the @Plugin annotation must be set to Node.CATEGORY (Core)
- It must have a plugin factory

See JsonTemplateLayout.java for an example, and notice these details: there are two plugin declarations, JsonTemplateLayout and JsonTemplateLayout.EventTemplateAdditionalField, and both of them set the category attribute to Node.CATEGORY and provide a @PluginBuilderFactory-annotated static method.

Declaring plugin factories

A plugin factory is responsible for:

- Creating an instance of the plugin
- Receiving the values (the Configuration instance, configuration attributes, etc.) available in the context

Every plugin that needs to be represented by an element in a configuration file must declare a plugin factory using one of the following:

- A @PluginFactory-annotated static method. What is expected to be received is modelled as method arguments. Intended for simple plugins that receive fewer than a handful of values. See CsvParameterLayout.java for an example of @PluginFactory usage.
- A @PluginBuilderFactory-annotated static method with return type Builder<T>. What is expected to be received is modelled as fields of a builder class. Intended for more sophisticated wiring needs.

Advantages of a builder class over a factory method:

- Attribute names don't need to be specified if they match the field name.
- Default values can be specified in code rather than through an annotation. This also allows a runtime-calculated default value, which isn't allowed in annotations.
- Default values are specified via code rather than relying on reflection and injection, so they work programmatically as well as in a configuration file.
- Adding new optional parameters doesn't require existing programmatic configuration to be refactored.
Easier to write unit tests using builders rather than factory methods with optional parameters. See JsonTemplateLayout.java for an example on @PluginBuilderFactory usage. If a plugin class implements Collection or Map , then no factory method is used. Instead, the class is instantiated using the default constructor, and all child configuration nodes are added to the Collection or Map . Plugin factory attribute types To allow the current Configuration to populate the correct arguments for the @PluginFactory -annotated method (or fields for the builder class), every argument to the method must be annotated using one of the following attribute types. @PluginAliases Identifies a list of aliases for a @Plugin , @PluginAttribute , or @PluginBuilderAttribute @PluginAttribute Denotes a configuration element attribute. The parameter must be convertible from a String using a TypeConverter . Most built-in types are already supported, but custom TypeConverter plugins may also be provided for more type support. Note that PluginBuilderAttribute can be used in builder class fields as an easier way to provide default values. @PluginConfiguration The current Configuration object will be passed to the plugin as a parameter. @PluginElement The parameter may represent a complex object that itself has parameters that can be configured. This also supports injecting an array of elements. @PluginNode The current Node being parsed will be passed to the plugin as a parameter. @PluginValue The value of the current Node or its attribute named value . Each attribute or element annotation must include the name that must be present in the configuration in order to match the configuration item to its respective parameter. For plugin builders, the names of the fields will be used by default if no name is specified in the annotation. 
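The builder-based factory pattern described above can be sketched as follows. The appender, its name, and its attribute are invented for illustration; the sketch assumes log4j-core 2.x on the compile classpath, and a real appender would of course deliver events somewhere rather than discard them:

```java
import java.io.Serializable;
import org.apache.logging.log4j.core.Filter;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.Node;
import org.apache.logging.log4j.core.config.Property;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginBuilderAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginBuilderFactory;
import org.apache.logging.log4j.core.config.plugins.PluginElement;
import org.apache.logging.log4j.core.config.plugins.validation.constraints.Required;

// Hypothetical "Null" appender that drops every event; the plugin is
// declared in the Core category so it can appear in a configuration file.
@Plugin(name = "Null", category = Node.CATEGORY)
public final class NullAppender extends AbstractAppender {

    private NullAppender(String name, Filter filter,
                         Layout<? extends Serializable> layout) {
        super(name, filter, layout, true, Property.EMPTY_ARRAY);
    }

    @Override
    public void append(LogEvent event) {
        // Intentionally discard the event.
    }

    // The @PluginBuilderFactory-annotated static method returns the builder;
    // everything the plugin expects to receive is modelled as builder fields.
    @PluginBuilderFactory
    public static Builder newBuilder() {
        return new Builder();
    }

    public static final class Builder
            implements org.apache.logging.log4j.core.util.Builder<NullAppender> {

        // Attribute name defaults to the field name ("name"); @Required is
        // one of the bundled constraint validators covered later.
        @Required(message = "NullAppender requires a name")
        @PluginBuilderAttribute
        private String name;

        // Nested plugins, such as a layout, are injected via @PluginElement.
        @PluginElement("Layout")
        private Layout<? extends Serializable> layout;

        public Builder setName(String name) {
            this.name = name;
            return this;
        }

        public Builder setLayout(Layout<? extends Serializable> layout) {
            this.layout = layout;
            return this;
        }

        @Override
        public NullAppender build() {
            return new NullAppender(name, null, layout);
        }
    }
}
```

With this declaration, a configuration element such as <Null name="discard"/> would be wired to the builder's name field automatically, because the attribute name matches the field name.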
Plugin factory attribute type converters

TypeConverters are a group of plugins for converting the Strings read from configuration file elements into the types used in plugin factory attributes. Other plugins can already be injected via the @PluginElement annotation; with TypeConverters, any supported type can be used in a @PluginAttribute-annotated factory attribute. Conversion of enum types is supported on demand and does not require custom TypeConverters. A large number of built-in Java classes (int, long, BigDecimal, etc.) are already supported; see TypeConverters for a more exhaustive listing.

You can create custom TypeConverters as follows:

- Implement the TypeConverter interface
- Set the category attribute of the @Plugin annotation to TypeConverters.CATEGORY (TypeConverter). Unlike other plugins, the plugin name of a TypeConverter is purely cosmetic.
- Have a default constructor
- Optionally, implement Comparable<TypeConverter<?>>, which is used to determine the order in case of multiple TypeConverter candidates for a certain type

See TypeConverters.java for example implementations.

Plugin factory attribute validators

Plugin factory fields and parameters can be automatically validated at runtime using constraint validators inspired by Bean Validation. The following annotations are bundled with Log4j, but custom ConstraintValidators can be created as well:

- @Required — Validates that a value is non-empty. This covers a check for null as well as several other scenarios: empty CharSequence objects, empty arrays, empty Collection instances, and empty Map instances.
- @ValidHost — Validates that a value corresponds to a valid host name. This uses the same validation as InetAddress.getByName(String).
- @ValidPort — Validates that a value corresponds to a valid port number between 0 and 65535.
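The three steps above (implement the interface, use the TypeConverter category, provide a default constructor) can be sketched with a hypothetical converter for java.time.Duration; the class is invented for illustration and assumes log4j-core 2.x on the compile classpath:

```java
import java.time.Duration;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.convert.TypeConverter;
import org.apache.logging.log4j.core.config.plugins.convert.TypeConverters;

// Hypothetical converter that would let a factory declare a
// @PluginAttribute of type Duration and accept ISO-8601 strings such
// as "PT30S" in the configuration file. The plugin name is cosmetic
// for TypeConverters; only the category matters for discovery.
@Plugin(name = "Duration", category = TypeConverters.CATEGORY)
public class DurationConverter implements TypeConverter<Duration> {

    // A default (no-argument) constructor is required; the implicit
    // one suffices here.

    @Override
    public Duration convert(String s) {
        return Duration.parse(s);
    }
}
```

If two converters targeted the same type, implementing Comparable<TypeConverter<?>> as noted above would decide which one wins.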
Registering plugins

To work properly, each Log4j plugin needs:

- To be registered in the Log4j plugin descriptor (i.e., Log4j2Plugins.dat). This file is generated by the PluginProcessor annotation processor at compile-time.
- (Optionally) To be registered in the GraalVM reachability metadata descriptor, which allows the plugin to be used in GraalVM native applications. The GraalVmProcessor annotation processor creates such a file at compile-time.

The GraalVmProcessor requires your project's groupId and artifactId to correctly generate the GraalVM reachability metadata file in the recommended location. Provide these values to the processor using the log4j.graalvm.groupId and log4j.graalvm.artifactId annotation processor options.

You need to configure your build tool as follows to use both plugin processors:

Maven:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>${maven-compiler-plugin.version}</version>
  <configuration>
    <compilerArgs>
      <!-- Provide the project coordinates to `GraalVmProcessor`: -->
      <arg>-Alog4j.graalvm.groupId=${project.groupId}</arg>
      <arg>-Alog4j.graalvm.artifactId=${project.artifactId}</arg>
    </compilerArgs>
  </configuration>
  <executions>
    <execution>
      <!--
        ~ Explicitly list the annotation processors for the default compile execution.
        ~ This is required starting with JDK 23, where annotation processors are not enabled automatically.
        ~ Explicit configuration also improves build reliability and clarity.
        ~ For more details, see: https://inside.java/2024/06/18/quality-heads-up/
      -->
      <id>default-compile</id>
      <configuration>
        <annotationProcessorPaths>
          <!-- Include `log4j-core` providing
               1. `org.apache.logging.log4j.core.config.plugins.processor.PluginProcessor`
                  that generates `Log4j2Plugins.dat`
               2. `org.apache.logging.log4j.core.config.plugins.processor.GraalVmProcessor`
                  that generates the GraalVM reachability metadata file -->
          <path>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.25.3</version>
          </path>
        </annotationProcessorPaths>
        <annotationProcessors>
          <!-- Process sources using `PluginProcessor` to generate `Log4j2Plugins.dat` -->
          <processor>org.apache.logging.log4j.core.config.plugins.processor.PluginProcessor</processor>
          <!-- Process sources using `GraalVmProcessor` to generate a GraalVM reachability metadata file -->
          <processor>org.apache.logging.log4j.core.config.plugins.processor.GraalVmProcessor</processor>
        </annotationProcessors>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Gradle:

```groovy
compileJava {
    // Provide the project coordinates to the `GraalVmProcessor`:
    options.compilerArgs << '-Alog4j.graalvm.groupId=org.example'
    options.compilerArgs << '-Alog4j.graalvm.artifactId=example'
}

dependencies {
    // Process sources using:
    // * `PluginProcessor` to generate `Log4j2Plugins.dat`
    // * `GraalVmProcessor` to generate a GraalVM reachability metadata file
    annotationProcessor('org.apache.logging.log4j:log4j-core:2.25.3')
}
```

Discovering plugins

PluginManager is responsible for discovering plugins and loading their descriptions. It locates plugins by looking in the following places, in the given order:

- Plugin descriptor files on the classpath (using the class loader that loaded the log4j-core artifact). These files are generated automatically at compile-time by the Log4j plugin annotation processor. See Registering plugins for details.
- [OSGi only] Serialized plugin listing files in each active OSGi bundle. A BundleListener is added on activation to continue checking new bundles after Log4j Core has started.
- [Deprecated] A comma-separated list of packages specified by the log4j.plugin.packages system property
- [Deprecated] Packages passed to the static PluginManager.addPackages() method before Log4j configuration takes place
- [Deprecated] The packages attribute declared at the root element of your Log4j configuration file

Loading plugins

It is pretty common for a plugin to use other plugins; appenders accept layouts, some layouts accept key-value pairs, and so on. You can do this as follows:

- If your plugin has a plugin factory (that is, it is represented by a configuration file element), you can use the @PluginElement annotation to receive other plugins. See the @PluginElement("EventTemplateAdditionalField") usage in JsonTemplateLayout.java for an example.
- Otherwise, you can use PluginUtil, a convenient wrapper around PluginManager, to discover and load plugins. See TemplateResolverFactories.java for example usages.
https://docs.aws.amazon.com/ko_kr/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-event-request-response.html

How Lambda@Edge handles requests and responses — Amazon CloudFront Developer Guide

When you associate a CloudFront distribution with a Lambda@Edge function, CloudFront intercepts requests and responses at CloudFront edge locations. You can execute Lambda functions when the following CloudFront events occur:

- When CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards a request to the origin (origin request)
- When CloudFront receives a response from the origin (origin response)
- Before CloudFront returns the response to the viewer (viewer response)

If you use AWS WAF, Lambda@Edge viewer request functions run after AWS WAF rules are applied. For more information, see Working with requests and responses and Lambda@Edge event structure.
https://support.microsoft.com/lv-lv/windows/p%C4%81rvald%C4%ABt-s%C4%ABkfailus-microsoft-edge-skat%C4%ABt-at%C4%BCaut-blo%C4%B7%C4%93t-dz%C4%93st-un-izmantot-168dab11-0753-043d-7c16-ede5947fc64d

Manage cookies in Microsoft Edge: view, allow, block, delete, and use — Microsoft Support

Applies to: Windows 10, Windows 11, Microsoft Edge

Cookies are small pieces of data that the websites you visit store on your device. They serve various purposes, such as remembering sign-in credentials and site preferences, and tracking user activity. However, you may want to delete cookies for privacy reasons or to fix browsing problems. This article provides instructions on how to:

- View all cookies
- Allow all cookies
- Allow cookies on a specific website
- Block third-party cookies
- Block all cookies
- Block cookies on a specific site
- Delete all cookies
- Delete cookies on a specific site
- Delete cookies every time you close the browser
- Use cookies to preload pages for faster browsing

View all cookies

1. Open Edge and, in the upper-right corner of the browser window, select Settings and more.
2. Select Settings > Privacy, search, and services.
3. Select Cookies, then click See all cookies and site data to view all stored cookies and the related site information.

Allow all cookies

Allowing cookies lets websites save and retrieve data in your browser, which can improve your browsing experience by remembering your preferences and sign-in information.

1. Open Edge and, in the upper-right corner of the browser window, select Settings and more.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and enable the option Allow sites to save and read cookie data (recommended) to allow all cookies.

Allow cookies on a specific site

Allowing cookies lets websites save and retrieve data in your browser, which can improve your browsing experience by remembering your preferences and sign-in information.

1. Open Edge and, in the upper-right corner of the browser window, select Settings and more.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and go to the Allowed to save cookies section. Select Add a site to allow cookies site by site, entering each site's URL.

Block third-party cookies

If you don't want third-party sites to store cookies on your PC, you can block them. Doing so may prevent some pages from displaying correctly, or you may receive a message from a site saying that you need to allow cookies to view it.

1. Open Edge and, in the upper-right corner of the browser window, select Settings and more.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and enable the Block third-party cookies toggle.

Block all cookies

If you don't want websites to store cookies on your PC, you can block all cookies.
Doing so may prevent some pages from displaying correctly, or you may receive a message from a site saying that you need to allow cookies to view it.

1. Open Edge and, in the upper-right corner of the browser window, select Settings and more.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and disable the option Allow sites to save and read cookie data (recommended) to block all cookies.

Block cookies on a specific site

Microsoft Edge lets you block cookies on a specific site; however, this may prevent some pages from displaying correctly, or a site may show a message saying that cookies must be allowed to view it. To block cookies for a specific site:

1. Open Edge and, in the upper-right corner of the browser window, select Settings and more.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and go to the Not allowed to save and read cookies section. Select Add a site to block cookies site by site, entering each site's URL.

Delete all cookies

1. Open Edge and, in the upper-right corner of the browser window, select Settings and more.
2. Select Settings > Privacy, search, and services.
3. Select Clear browsing data, then select Choose what to clear next to Clear browsing data now.
4. Under Time range, choose a time range from the list.
5. Select Cookies and other site data, then select Clear now.

Note: Alternatively, you can delete cookies by pressing CTRL + SHIFT + DELETE together and then following steps 4 and 5.

All of your cookies and other site data within the selected time range are now deleted, and you will be signed out of most sites.
Delete cookies on a specific site

1. Open Edge and select Settings and more > Settings > Privacy, search, and services.
2. Select Cookies, then click See all cookies and site data and search for the site whose cookies you want to delete.
3. Select the down arrow to the right of the site whose cookies you want to delete, and select Delete.

The cookies for the selected site are now deleted. Repeat these steps for each site whose cookies you want to delete.

Delete cookies every time you close the browser

1. Open Edge and select Settings and more > Settings > Privacy, search, and services.
2. Select Clear browsing data, then select Choose what to clear every time you close the browser.
3. Turn on the Cookies and other site data toggle.

With this feature turned on, all cookies and other site data are deleted every time you close Edge, and you will be signed out of most sites.

Use cookies to preload pages for faster browsing

1. Open Edge and, in the upper-right corner of the browser window, select Settings and more.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and enable the Preload pages for faster browsing and searching toggle.
https://logging.apache.org/log4j/2.x/manual/installation.html | Installation :: Apache Log4j

Installation

On this page we elaborate on the various ways to install Log4j in your library or application.

Shortcuts

Below we share some shortcuts for the impatient. We still advise you to skim through this page to get a grip on the fundamental logging concepts and understand which recipe fits your bill best.

Are you a library developer? You just need to log against a logging API. See Installing Log4j API.

Are you an application developer? Your code, and the libraries it depends on, are most probably already logging against a logging API; you just need to install a logging implementation. See Installing Log4j Core.

Are you a Spring Boot application developer?
See Installing Log4j Core for Spring Boot applications.

Are you migrating from Log4j 1, Logback, or SLF4J? See the corresponding migration guides.

Concepts (APIs, Implementations, and Bridges)

A few logging concepts are crucial to understanding the installation options.

Logging API
A logging API is an interface your code, or your dependencies, directly log against. It is required at compile time. It is implementation agnostic: it ensures that your application can write logs without being tied to a specific logging implementation. Log4j API, SLF4J, JUL (Java Logging), JCL (Apache Commons Logging), JPL (Java Platform Logging), and JBoss Logging are the major logging APIs.

Logging implementation
A logging implementation is only required at runtime and can be changed without recompiling your software. Log4j Core, JUL (Java Logging), and Logback are the most well-known logging implementations.

Logging bridge
Logging implementations accept input from a single logging API of their preference: Log4j Core from Log4j API, Logback from SLF4J, etc. A logging bridge is a simple logging implementation of one logging API that forwards all messages to a foreign logging API. Logging bridges allow a logging implementation to accept input from logging APIs other than its primary one. For instance, log4j-slf4j2-impl bridges SLF4J calls to Log4j API, effectively enabling Log4j Core to accept input from SLF4J.

With this in mind, the type of software you are writing determines whether you should install a logging API, an implementation, or bridges:

Libraries
They only require a logging API and delegate the choice of the implementation to applications. If a logging implementation is required by the library's tests, it should be in the appropriate test scope.

Applications
They need a logging implementation, but also bridges for each of the major logging APIs, to support log statements from the libraries they use.
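To make the bridge concept concrete, here is a stdlib-only Java sketch of the forwarding pattern. The MiniLogApi, MiniBridge, and MiniBackend names are invented for illustration; a real bridge such as log4j-slf4j2-impl plays the same role between SLF4J and Log4j API.

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical mini logging API, standing in for SLF4J (names invented).
interface MiniLogApi {
    void info(String message);
}

// A hypothetical backend, standing in for Log4j Core: it just stores events.
class MiniBackend {
    static final List<String> EVENTS = new ArrayList<>();
    static void append(String event) { EVENTS.add(event); }
}

// The bridge: a trivial implementation of the foreign API that forwards
// every call to the backend -- the role log4j-slf4j2-impl plays between
// SLF4J and Log4j API.
class MiniBridge implements MiniLogApi {
    @Override public void info(String message) {
        MiniBackend.append("INFO " + message);
    }
}

public class BridgeDemo {
    public static void main(String[] args) {
        MiniLogApi log = new MiniBridge();       // library code sees only the API
        log.info("hello");                       // the call crosses the bridge
        System.out.println(MiniBackend.EVENTS);  // [INFO hello]
    }
}
```

The point of the pattern: the caller compiles only against the API type, so the backend (and the bridge routing to it) can be swapped at deployment time without recompiling.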
For example, your application might log against Log4j API while one of its dependencies logs against SLF4J. In this case, you need to install both log4j-core and log4j-slf4j2-impl. To make things a little more tangible, consider the following visualization of a typical Log4j Core installation with bridges for an application:

Figure 1. Visualization of a typical Log4j Core installation with SLF4J, JUL, and JPL bridges

Requirements

The Log4j 2 runtime requires a minimum of Java 8. See the Download page for older releases supporting Java 6 and 7.

Configuring the build tool

The easiest way to install Log4j is through a build tool such as Maven or Gradle. The rest of the instructions on this page assume you use one of these.

Importing the Bill-of-Materials (BOM)

To keep your Log4j module versions in sync with each other, a BOM (Bill of Materials) file is provided for your convenience. You can import the BOM in your build tool of preference:

Maven:

<dependencyManagement> <dependencies> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-bom</artifactId> <version>2.25.3</version> <scope>import</scope> <type>pom</type> </dependency> </dependencies> </dependencyManagement>

Gradle:

dependencies { implementation platform('org.apache.logging.log4j:log4j-bom:2.25.3') }

Once you import the BOM, you don't need to explicitly provide the versions of the Log4j artifacts managed by it. In the rest of the explanations, we assume that the Log4j BOM is imported.

Using snapshots

Do you want to test the latest (unstable!) development version? You can access the latest development snapshots through the https://repository.apache.org/content/groups/snapshots/ repository. Snapshots are published for development and testing purposes; they should not be used in production!
Maven:

<repositories> <repository> <id>apache.snapshots</id> <name>Apache Snapshot Repository</name> <url>https://repository.apache.org/snapshots</url> <releases> <enabled>false</enabled> </releases> </repository> </repositories>

Gradle:

repositories { mavenCentral() maven { url 'https://repository.apache.org/snapshots' } }

Installing Log4j API

The easiest way to install Log4j API is through a dependency management tool such as Maven or Gradle, by adding the following dependency:

Maven:

<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>${log4j-api.version}</version> </dependency>

Gradle:

implementation 'org.apache.logging.log4j:log4j-api:${log4j-api.version}'

Installing a logging implementation

Log4j provides several modules to facilitate the deployment of different logging implementations:

Simple Logger
A fallback implementation embedded into the Log4j API artifact. Using this implementation generates an error message unless you enable it explicitly. See Installing Simple Logger for more details.

log4j-core
The reference implementation. Log4j Core primarily accepts input from Log4j API. Refer to Installing Log4j Core for the installation instructions.

log4j-to-jul
The bridge that translates Log4j API calls to JUL (Java Logging). See Installing JUL for the installation instructions.

log4j-to-slf4j
The bridge that translates Log4j API calls to SLF4J. Since currently only Logback implements SLF4J natively, refer to Installing Logback for the installation instructions.

To ensure that your code does not directly depend on a particular logging implementation, the logging backend should be put in the appropriate scope of your dependency manager:

Software type | Maven scope | Gradle configuration
Application | runtime | runtimeOnly
Library | test | testRuntimeOnly

Installing Simple Logger

The Simple Logger implementation is embedded in the Log4j API artifact and does not need any external dependency.
It is intended as a convenience for environments where a fully-fledged logging implementation is either missing or cannot be included for other reasons. To avoid unintentional usage, the Log4j API logs an error to the Status Logger:

2024-10-03T11:53:34.281462230Z main ERROR Log4j API could not find a logging provider.

To remove the warning and confirm that you want to use Simple Logger, add a log4j2.component.properties file at the root of your class path with the following content:

# Activate the Simple Logger implementation
log4j.provider = org.apache.logging.log4j.simple.internal.SimpleProvider

Installing Log4j Core

Log4j Core is the reference logging implementation of the Log4j project. It primarily accepts input from Log4j API. Do you have a Spring Boot application? You can skip directly to Installing Log4j Core for Spring Boot applications.

To install Log4j Core as your logging implementation, add the following dependency to your application:

Maven:

<dependencies> <!-- Logging implementation (Log4j Core) --> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <scope>runtime</scope> </dependency> <!-- Logging bridges will follow... --> </dependencies>

Gradle:

runtimeOnly 'org.apache.logging.log4j:log4j-core' // Logging bridges will follow...

Installing bridges

If either your application or one of its dependencies logs against a logging API different from Log4j API, you need to bridge that API to Log4j API. Do you need bridges? And if so, which ones?

If you have any direct or transitive dependency on org.slf4j:slf4j-api, you need the SLF4J-to-Log4j bridge.
If you have any direct or transitive dependency on commons-logging:commons-logging, you need the JCL-to-Log4j bridge.
If it is a standalone application (i.e., not running in a Java EE container), you will probably need the JUL-to-Log4j and JPL-to-Log4j bridges.

The following sections explain the installation of the Log4j-provided bridges.
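Putting the pieces together, here is a minimal sketch of a typical application's Maven setup: the BOM import, the API at compile scope, and Core plus the SLF4J bridge at runtime scope. Coordinates and versions are taken from the snippets on this page; treat this as a starting point, not a definitive configuration.

```xml
<!-- Sketch: BOM import plus a typical application's logging dependencies. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-bom</artifactId>
      <version>2.25.3</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>
<dependencies>
  <!-- Logging API (compile scope: your code logs against it) -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
  </dependency>
  <!-- Logging implementation -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <scope>runtime</scope>
  </dependency>
  <!-- Bridge: SLF4J -> Log4j API, for dependencies that log via SLF4J -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j2-impl</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```

Note that only log4j-api is on the compile classpath; keeping the implementation and bridges at runtime scope prevents your code from accidentally depending on Log4j Core internals.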
Installing the SLF4J-to-Log4j bridge

You can translate SLF4J calls to Log4j API using the log4j-slf4j2-impl artifact (we assume you use log4j-bom for dependency management):

Maven:

<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j2-impl</artifactId> <scope>runtime</scope> </dependency>

Gradle:

runtimeOnly 'org.apache.logging.log4j:log4j-slf4j2-impl'

Are you still using SLF4J 1.x? Use the log4j-slf4j-impl artifact instead:

Maven:

<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j-impl</artifactId> <scope>runtime</scope> </dependency>

Gradle:

runtimeOnly 'org.apache.logging.log4j:log4j-slf4j-impl'

Installing the JUL-to-Log4j bridge

You can translate JUL (Java Logging) calls to Log4j API using the log4j-jul artifact:

Maven:

<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jul</artifactId> <scope>runtime</scope> </dependency>

Gradle:

runtimeOnly 'org.apache.logging.log4j:log4j-jul'

To activate the bridge from JUL to Log4j API, you also need to add

-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

to the JVM parameters in your application launcher. The JUL-to-Log4j bridge supports additional configuration and installation methods; see JUL-to-Log4j bridge for more information.

Installing the JPL-to-Log4j bridge

You can translate JPL (Java Platform Logging) calls to Log4j API using the log4j-jpl artifact:

Maven:

<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jpl</artifactId> <scope>runtime</scope> </dependency>

Gradle:
runtimeOnly 'org.apache.logging.log4j:log4j-jpl'

Installing the JCL-to-Log4j bridge

Since version 1.3.0, Apache Commons Logging natively supports Log4j API. You can enforce the version of a transitive dependency using the dependency management mechanism appropriate to your build tool:

Maven users should add an entry to the <dependencyManagement> section of their POM file:

<dependencyManagement> <dependencies> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.3.5</version> </dependency> </dependencies> </dependencyManagement>

Gradle users should refer to the "Using a platform to control transitive versions" section of the Gradle documentation.

Are you using Commons Logging 1.2.0 or earlier? Install the following dependency instead (we assume you use log4j-bom for dependency management):

Maven:

<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jcl</artifactId> <scope>runtime</scope> </dependency>

Gradle:

runtimeOnly 'org.apache.logging.log4j:log4j-jcl'

Installing the JBoss Logging-to-Log4j bridge

JBoss Logging ships with an integrated bridge to Log4j API and requires no steps on your part. See Supported Log Managers for more information.
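After installing the bridges, it can be useful to verify at runtime which JUL LogManager is actually installed (the java.util.logging.manager flag described above is easy to forget in a launcher). A stdlib-only sketch; the class name printed depends on your JVM flags, so no particular output is assumed here:

```java
import java.util.logging.LogManager;

// Prints the LogManager implementation currently installed in this JVM.
// When launched with
//   -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
// this reports the Log4j bridge; otherwise it reports the JDK default.
public class CheckLogManager {
    public static String installedManager() {
        return LogManager.getLogManager().getClass().getName();
    }

    public static void main(String[] args) {
        System.out.println(installedManager());
    }
}
```

Running this early in application startup is a cheap smoke test that the bridge took effect before any library started logging through JUL.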
Installing Log4j Core for Spring Boot applications

Spring Boot users should replace the spring-boot-starter-logging dependency with spring-boot-starter-log4j2:

Maven:

<dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> <exclusions> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-logging</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-log4j2</artifactId> <scope>runtime</scope> </dependency> </dependencies>

Gradle:

configurations { all.exclude group: 'org.springframework.boot', module: 'spring-boot-starter-logging' } dependencies { runtimeOnly group: 'org.springframework.boot', module: 'spring-boot-starter-log4j2' }

The spring-boot-starter-log4j2 artifact automatically installs Log4j Core and the JUL-to-Log4j bridge, and configures them. You don't need to add any other dependency or configure JUL yourself. See the Spring Boot Logging documentation for further information.

Installing Log4j Core for GraalVM applications

See Using Log4j Core in our GraalVM guide for more details on how to create GraalVM native applications that use Log4j Core.

Configuring Log4j Core

As with any other logging implementation, Log4j Core needs to be properly configured. Log4j Core supports many different configuration formats: JSON, XML, YAML, and Java properties. To configure Log4j Core, see Configuration file.
A basic configuration can be obtained by adding one of these files to your application's classpath:

log4j2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration xmlns="https://logging.apache.org/xml/ns"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="https://logging.apache.org/xml/ns https://logging.apache.org/xml/ns/log4j-config-2.xsd">
  <Appenders>
    <Console name="CONSOLE">
      <PatternLayout pattern="%d [%t] %5p %c{1.} - %m%n"/> (1)
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="CONSOLE"/>
    </Root>
  </Loggers>
</Configuration>

log4j2.json:

{
  "Configuration": {
    "Appenders": {
      "Console": {
        "name": "CONSOLE",
        "PatternLayout": {
          "pattern": "%d [%t] %5p %c{1.} - %m%n" (1)
        }
      }
    },
    "Loggers": {
      "Root": {
        "level": "INFO",
        "AppenderRef": {
          "ref": "CONSOLE"
        }
      }
    }
  }
}

log4j2.yaml:

Configuration:
  Appenders:
    Console:
      name: CONSOLE
      PatternLayout:
        pattern: "%d [%t] %5p %c{1.} - %m%n" (1)
  Loggers:
    Root:
      level: INFO
      AppenderRef:
        ref: CONSOLE

log4j2.properties:

appender.0.type = Console
appender.0.name = CONSOLE
appender.0.layout.type = PatternLayout (1)
appender.0.layout.pattern = %d [%t] %5p %c{1.} - %m%n
rootLogger.level = INFO
rootLogger.appenderRef.0.ref = CONSOLE

(1) While Pattern Layout is a good first choice and preferable for tests, we recommend a structured format such as JSON Template Layout for production deployments.

To use these formats, the following additional dependencies are required.

log4j2.xml: no extra dependency, but JPMS users need to add

module foo.bar { requires java.xml; }

to their module-info.java descriptor.
log4j2.json requires Jackson:

Maven:

<dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>2.19.1</version> <scope>runtime</scope> </dependency>

Gradle:

runtimeOnly 'com.fasterxml.jackson.core:jackson-databind:2.19.1'

log4j2.yaml requires the Jackson YAML data format:

Maven:

<dependency> <groupId>com.fasterxml.jackson.dataformat</groupId> <artifactId>jackson-dataformat-yaml</artifactId> <version>2.19.1</version> <scope>runtime</scope> </dependency>

Gradle:

runtimeOnly 'com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.19.1'

log4j2.properties: no dependency required.

Installing JUL

Are you using JBoss Log Manager as your JUL implementation? You can skip this section and use the log4j2-jboss-logmanager and slf4j-jboss-logmanager bridges from the JBoss Logging project instead.

The Java Platform contains a very simple logging API and implementation called JUL (Java Logging). Since it is embedded in the platform, it only requires the addition of bridges from Log4j API and SLF4J:

Maven:

<dependencies> <!-- Log4j-to-JUL bridge --> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-to-jul</artifactId> <scope>runtime</scope> </dependency> <!-- SLF4J-to-JUL bridge --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-jdk14</artifactId> <version>2.0.17</version> <scope>runtime</scope> </dependency> <!-- ... --> </dependencies>

Gradle:

runtimeOnly 'org.apache.logging.log4j:log4j-to-jul' // Log4j-to-JUL bridge
runtimeOnly 'org.slf4j:slf4j-jdk14:2.0.17' // SLF4J-to-JUL bridge

See also java.util.logging.LogManager, to learn more about JUL configuration, and Log4j-to-JUL bridge, to learn more about the log4j-to-jul artifact.

Installing JUL for GraalVM applications

See Using JUL in our GraalVM guide for more details on how to create GraalVM native applications that use JUL.
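For reference, logging through JUL itself needs nothing beyond the JDK. A minimal stdlib-only sketch (the logger name and the capturing Handler are illustrative) that logs a message and intercepts it, which is also a handy pattern for asserting on log output in tests:

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Minimal JUL usage: attach a custom Handler and log through the stdlib API.
public class JulDemo {
    static final StringBuilder CAPTURED = new StringBuilder();

    public static String logAndCapture() {
        Logger logger = Logger.getLogger("com.example.JulDemo");
        logger.setUseParentHandlers(false); // keep the default console handler quiet
        logger.addHandler(new Handler() {
            @Override public void publish(LogRecord record) {
                CAPTURED.append(record.getLevel()).append(": ").append(record.getMessage());
            }
            @Override public void flush() {}
            @Override public void close() {}
        });
        logger.log(Level.INFO, "Hello from JUL");
        return CAPTURED.toString();
    }

    public static void main(String[] args) {
        System.out.println(logAndCapture()); // INFO: Hello from JUL
    }
}
```

With the log4j-to-jul bridge installed, code written against Log4j API would end up in these same JUL handlers.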
Installing Logback

To install Logback as the logging implementation, you need to add Logback itself and a Log4j-to-SLF4J bridge:

Maven:

<dependencies> <!-- Logging implementation (Logback) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>{logback-version}</version> <scope>runtime</scope> </dependency> <!-- Log4j-to-SLF4J bridge --> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-to-slf4j</artifactId> <scope>runtime</scope> </dependency> </dependencies>

Gradle:

runtimeOnly 'ch.qos.logback:logback-classic:1.3.15'
runtimeOnly 'org.apache.logging.log4j:log4j-to-slf4j' // Log4j-to-SLF4J bridge

To configure Logback, see Logback's configuration documentation.

Installing Logback for GraalVM applications

See Using Logback in our GraalVM guide for more details on how to create GraalVM native applications that use Logback.

Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Please read our privacy policy. Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
https://logging.apache.org/log4j/2.x/manual/installation.html | Installation :: Apache Log4j a subproject of Apache Logging Services Home Download Release notes Support Versioning and maintenance policy Security Manual Getting started Installation API Loggers Event Logger Simple Logger Status Logger Fluent API Fish tagging Levels Markers Thread Context Messages Flow Tracing Implementation Architecture Configuration Configuration file Configuration properties Programmatic configuration Appenders File appenders Rolling file appenders Database appenders Network Appenders Message queue appenders Delegating Appenders Layouts JSON Template Layout Pattern Layout Lookups Filters Scripts JMX Extending Plugins Performance Asynchronous loggers Garbage-free logging References Plugin reference Java API reference Resources F.A.Q. Migrating from Log4j 1 Migrating from Logback Migrating from SLF4J Building GraalVM native images Integrating with Hibernate Integrating with Jakarta EE Integrating with service-oriented architectures Development Components Log4j IOStreams Log4j Spring Boot Support Log4j Spring Cloud Configuration JUL-to-Log4j bridge Log4j-to-JUL bridge Related projects Log4j Jakarta EE Log4j JMX GUI Log4j Kotlin Log4j Scala Log4j Tools Log4j Transformation Tools Home Manual Installation Edit this Page Installation In this page we will elaborate on various ways to install Log4j in your library or application. Shortcuts Below we share some shortcuts for the impatient. We strongly advise you to skim through this page to get a grip on fundamental logging concepts and understand which recipe fits your bill best. Are you a library developer? You just need to log against a logging API . See Installing Log4j API . Are you an application developer? Your code and libraries it depends on are most probably already logging against a logging API, you just need to install a logging implementation . See Installing Log4j Core . Are you a Spring Boot application developer? 
See Installing Log4j Core for Spring Boot applications . Are you migrating from…​ Log4j 1 , Logback , or SLF4J ? Concepts (APIs, Implementations, and Bridges) It is crucial to understand certain concepts in logging to be able to talk about the installation of them. Logging API A logging API is an interface your code or your dependencies directly logs against. It is required at compile-time. It is implementation agnostic to ensure that your application can write logs, but is not tied to a specific logging implementation. Log4j API, SLF4J , JUL (Java Logging) , JCL (Apache Commons Logging) , JPL (Java Platform Logging) and JBoss Logging are major logging APIs. Logging implementation A logging implementation is only required at runtime and can be changed without the need to recompile your software. Log4j Core, JUL (Java Logging) , Logback are the most well-known logging implementations. Logging bridge Logging implementations accept input from a single logging API of their preference; Log4j Core from Log4j API, Logback from SLF4J, etc. A logging bridge is a simple logging implementation of a logging API that forwards all messages to a foreign logging API. Logging bridges allow a logging implementation to accept input from other logging APIs that are not their primary logging API. For instance, log4j-slf4j2-impl bridges SLF4J calls to Log4 API and effectively enables Log4j Core to accept input from SLF4J. With this in mind, the type of software you are writing determines whether you should be installing a logging API, implementation, or bridge: Libraries They only require a logging API and delegate the choice of the implementation to applications. If a logging implementation is required by tests of the library, it should be in the appropriate test scope. Applications They need a logging implementation, but also bridges of each of the major logging APIs to support log statements from the libraries they use. 
For example, your application might be logging against Log4j API and one of its dependencies against SLF4J. In this case, you need to install log4j-core and log4j-slf4j2-impl . To make things a little bit more tangible, consider the following visualization of a typical Log4j Core installation with bridges for an application: Figure 1. Visualization of a typical Log4j Core installation with SLF4J, JUL, and JPL bridges Requirements The Log4j 2 runtime requires a minimum of Java 8. See the Download page for older releases supporting Java 6 and 7. Configuring the build tool The easiest way to install Log4j is through a build tool such as Maven or Gradle. The rest of the instructions in this page assume you use one of these. Importing the Bill-of-Materials (aka. BOM) To keep your Log4j module versions in sync with each other, a BOM (Bill of Material) file is provided for your convenience. You can import the BOM in your build tool of preference: Maven Gradle <dependencyManagement> <dependencies> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-bom</artifactId> <version>2.25.3</version> <scope>import</scope> <type>pom</type> </dependency> </dependencies> </dependencyManagement> dependencies { implementation platform('org.apache.logging.log4j:log4j-bom:2.25.3') } Once you import the BOM, you don’t need to explicitly provide the versions of the Log4j artifacts managed by it. In the rest of the explanations, we will assume that the Log4j BOM is imported. Using snapshots Do you want to test the latest ( unstable! ) development version? Click here details. You can access the latest development snapshots by using the https://repository.apache.org/content/groups/snapshots/ repository. Snapshots are published for development and testing purposes; they should not be used at production! 
Maven Gradle <repositories> <repository> <id>apache.snapshots</id> <name>Apache Snapshot Repository</name> <url>https://repository.apache.org/snapshots</url> <releases> <enabled>false</enabled> </releases> </repository> </repositories> repositories { mavenCentral() maven { url 'https://repository.apache.org/snapshots' } } Installing Log4j API The easiest way to install Log4j API is through a dependency management tool such as Maven or Gradle, by adding the following dependency: Maven Gradle <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>${log4j-api.version}</version> </dependency> implementation 'org.apache.logging.log4j:log4j-api:${log4j-api.version}' Installing a logging implementation Log4j provides several modules to facilitate the deployment of different logging implementations: Simple Logger This is a fallback implementation embedded into the Log4j API artifact. The usage of this implementation generates an error message unless you enable it explicitly. See Installing Simple Logger for more details. log4j-core The reference implementation. Log4 Core primarily accepts input from Log4j API. Refer to Installing Log4j Core for the installation instructions. log4j-to-jul The bridge that translates Log4j API calls to JUL (Java Logging) . See Installing JUL for the installation instructions. log4j-to-slf4j The bridge that translates Log4j API calls to SLF4J . Since currently only Logback implements SLF4J natively, refer to Installing Logback for the installation instructions. To ensure that your code does not directly depend on a particular logging implementation, the logging backend should be put in the appropriate scope of your dependency manager: Software type Build tool Maven Gradle Application runtime runtimeOnly Library test testRuntimeOnly Installing Simple Logger The Simple Logger implementation is embedded in the Log4j API and does not need any external dependency. 
It is intended as a convenience for environments where either a fully-fledged logging implementation is missing, or cannot be included for other reasons. The Log4j API will log an error to the Status Logger to avoid its unintentional usages: 2024-10-03T11:53:34.281462230Z main ERROR Log4j API could not find a logging provider. To remove the warning and confirm that you want to use Simple Logger, add a log4j2.component.properties file at the root of your class path with content: # Activate Simple Logger implementation log4j.provider = org.apache.logging.log4j.simple.internal.SimpleProvider Installing Log4j Core Log4j Core is the reference logging implementation of the Log4j project. It primarily accepts input from Log4j API. Do you have a Spring Boot application? You can directly skip to Installing Log4j Core for Spring Boot applications . To install Log4j Core as your logging implementation, you need to add the following dependency to your application: Maven Gradle <dependencies> <!-- Logging implementation (Log4j Core) --> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <scope>runtime</scope> </dependency> <!-- Logging bridges will follow... --> </dependencies> runtimeOnly 'org.apache.logging.log4j:log4j-core' // Logging bridges will follow... Installing bridges If either your application or one of its dependencies logs against a logging API that is different from Log4j API, you need to bridge that API to Log4j API. Do you need bridges? And if so, which ones? If you have any direct or transitive dependency on org.slf4j:slf4j-api , you need the SLF4J-to-Log4j bridge . If you have any direct or transitive dependency on commons-logging:commons-logging , you need the JCL-to-Log4j bridge . If it is a standalone application (i.e., not running in a Java EE container), you will probably need JUL-to-Log4j and JPL-to-Log4j bridges. The following sections explain the installation of Log4j-provided bridges. 
Installing SLF4J-to-Log4j bridge You can translate SLF4J calls to Log4j API using the log4j-slf4j2-impl artifact: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j2-impl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-slf4j2-impl' Are you still using SLF4J 1.x? Add this example instead: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j-impl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-slf4j-impl' Installing JUL-to-Log4j bridge You can translate JUL (Java Logging) calls to Log4j API using the log4j-jul artifact: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jul</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jul' In order to activate the bridge from JUL to Log4j API, you also need to add: -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager to the JVM parameters in your application launcher. The JUL-to-Log4j bridge supports additional configuration and installation methods. See JUL-to-Log4j bridge for more information. Installing JPL-to-Log4j bridge You can translate JPL (Java Platform Logging) calls to Log4j API using the log4j-jpl artifact: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jpl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. 
runtimeOnly 'org.apache.logging.log4j:log4j-jpl' Installing JCL-to-Log4j bridge Since version 1.3.0 Apache Commons Logging natively supports Log4j API. You can enforce the version of a transitive dependency using the dependency management mechanism appropriate to your build tool: Maven Gradle Maven users should add an entry to the <dependencyManagement> section of their POM file: <dependencyManagement> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>1.3.5</version> </dependency> </dependencyManagement> Gradle users should refer to the Using a platform to control transitive versions of the Gradle documentation. Are you using Commons Logging 1.2.0 or earlier? You need to install the following dependency instead: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jcl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jcl' Installing JBoss Logging-to-Log4j bridge JBoss Logging is shipped with an integrated bridge to Log4j API and requires no steps on your part. See Supported Log Managers for more information. 
Installing Log4j Core for Spring Boot applications Spring Boot users should replace the spring-boot-starter-logging dependency with spring-boot-starter-log4j2 : Maven Gradle <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter</artifactId> <exclusions> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-logging</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-log4j2</artifactId> <scope>runtime</scope> </dependency> </dependencies> configurations { all.exclude group: 'org.springframework.boot', module: 'spring-boot-starter-logging' } dependencies { runtimeOnly group: 'org.springframework.boot', module: 'spring-boot-starter-log4j2' } The spring-boot-starter-log4j2 artifact will automatically install Log4j Core, JUL-to-Log4j bridge , and configure them. You don’t need to add any other dependency or configure JUL anymore. See Spring Boot Logging documentation for further information. Installing Log4j Core for GraalVM applications See Using Log4j Core in our GraalVM guide for more details on how to create GraalVM native applications that use Log4j Core. Configuring Log4j Core As with any other logging implementation, Log4j Core needs to be properly configured. Log4j Core supports many different configuration formats: JSON, XML, YAML, and Java properties. To configure Log4j Core, see Configuration file . 
A basic configuration can be obtained by adding one of these files to your application's classpath: log4j2.xml, log4j2.json, log4j2.yaml, or log4j2.properties.

log4j2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration xmlns="https://logging.apache.org/xml/ns"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="https://logging.apache.org/xml/ns https://logging.apache.org/xml/ns/log4j-config-2.xsd">
  <Appenders>
    <Console name="CONSOLE">
      <PatternLayout pattern="%d [%t] %5p %c{1.} - %m%n"/> (1)
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="CONSOLE"/>
    </Root>
  </Loggers>
</Configuration>

log4j2.json:

{
  "Configuration": {
    "Appenders": {
      "Console": {
        "name": "CONSOLE",
        "PatternLayout": {
          "pattern": "%d [%t] %5p %c{1.} - %m%n" (1)
        }
      }
    },
    "Loggers": {
      "Root": {
        "level": "INFO",
        "AppenderRef": {
          "ref": "CONSOLE"
        }
      }
    }
  }
}

log4j2.yaml:

Configuration:
  Appenders:
    Console:
      name: CONSOLE
      PatternLayout:
        pattern: "%d [%t] %5p %c{1.} - %m%n" (1)
  Loggers:
    Root:
      level: INFO
      AppenderRef:
        ref: CONSOLE

log4j2.properties:

appender.0.type = Console
appender.0.name = CONSOLE
appender.0.layout.type = PatternLayout (1)
appender.0.layout.pattern = %d [%t] %5p %c{1.} - %m%n
rootLogger.level = INFO
rootLogger.appenderRef.0.ref = CONSOLE

(1) While Pattern Layout is a good first choice and preferable for tests, we recommend using a structured format such as JSON Template Layout for production deployments.

To use these formats, the following additional dependencies are required:

log4j2.xml – No extra dependency, but JPMS users need to add

module foo.bar {
  requires java.xml;
}

to their module-info.java descriptor.
log4j2.json – Maven:

<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.19.1</version>
  <scope>runtime</scope>
</dependency>

Gradle:

runtimeOnly 'com.fasterxml.jackson.core:jackson-databind:2.19.1'

log4j2.yaml – Maven:

<dependency>
  <groupId>com.fasterxml.jackson.dataformat</groupId>
  <artifactId>jackson-dataformat-yaml</artifactId>
  <version>2.19.1</version>
  <scope>runtime</scope>
</dependency>

Gradle:

runtimeOnly 'com.fasterxml.jackson.dataformat:jackson-dataformat-yaml:2.19.1'

log4j2.properties – No dependency required.

Installing JUL

Are you using JBoss Log Manager as your JUL implementation? You can skip this section and use the log4j2-jboss-logmanager and slf4j-jboss-logmanager bridges from the JBoss Logging project instead.

The Java platform contains a very simple logging API and implementation called JUL (Java Logging). Since it is embedded in the platform, it only requires the addition of bridges from the Log4j API and SLF4J:

Maven:

<dependencies>
  <!-- Log4j-to-JUL bridge -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-to-jul</artifactId>
    <scope>runtime</scope>
  </dependency>
  <!-- SLF4J-to-JUL bridge -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <version>2.0.17</version>
    <scope>runtime</scope>
  </dependency>
  <!-- ... -->
</dependencies>

Gradle:

runtimeOnly 'org.apache.logging.log4j:log4j-to-jul' // Log4j-to-JUL bridge
runtimeOnly 'org.slf4j:slf4j-jdk14:2.0.17'          // SLF4J-to-JUL bridge

See also: java.util.logging.LogManager, to learn more about JUL configuration, and Log4j-to-JUL bridge, to learn more about the log4j-to-jul artifact.

Installing JUL for GraalVM applications

See Using JUL in our GraalVM guide for more details on how to create GraalVM native applications that use JUL.
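Because JUL ships with the JDK, a quick way to see where bridged calls end up is to register a capturing Handler. Once the bridges above are installed, Log4j API and SLF4J calls surface as java.util.logging LogRecords just like the one captured below. This is a self-contained sketch; the logger name com.example.App is purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class JulCapture {
    /** Logs msg at INFO on an illustrative logger and returns "LEVEL message" as captured. */
    static String captureInfo(String msg) {
        List<LogRecord> captured = new ArrayList<>();
        Logger logger = Logger.getLogger("com.example.App"); // illustrative logger name
        logger.setUseParentHandlers(false); // keep the record out of the console
        logger.addHandler(new Handler() {
            @Override public void publish(LogRecord record) { captured.add(record); }
            @Override public void flush() {}
            @Override public void close() {}
        });
        logger.log(Level.INFO, msg);
        LogRecord r = captured.get(0);
        return r.getLevel() + " " + r.getMessage();
    }

    public static void main(String[] args) {
        System.out.println(captureInfo("Hello from JUL")); // prints: INFO Hello from JUL
    }
}
```

The same Handler trick is handy in tests when you want to assert on log output without configuring a full logging backend.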
Installing Logback

To install Logback as the logging implementation, you only need to add logback-classic and the Log4j-to-SLF4J bridge:

Maven:

<dependencies>
  <!-- Logging implementation (Logback) -->
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.3.15</version>
    <scope>runtime</scope>
  </dependency>
  <!-- Log4j-to-SLF4J bridge -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-to-slf4j</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>

Gradle:

runtimeOnly 'ch.qos.logback:logback-classic:1.3.15'
runtimeOnly 'org.apache.logging.log4j:log4j-to-slf4j' // Log4j-to-SLF4J bridge

To configure Logback, see Logback's configuration documentation.

Installing Logback for GraalVM applications

See Using Logback in our GraalVM guide for more details on how to create GraalVM native applications that use Logback.

Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Please read our privacy policy. Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34
https://docs.aws.amazon.com/de_de/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html | Setting up IAM permissions and roles for Lambda@Edge - Amazon CloudFront

Contents: IAM permissions required to associate Lambda@Edge functions with CloudFront distributions · Function execution role for service principals · Service-linked roles for Lambda@Edge

Setting up IAM permissions and roles for Lambda@Edge

To configure Lambda@Edge, you need the following IAM permissions and roles for AWS Lambda:

IAM permissions – These permissions let you create your Lambda function and associate it with your CloudFront distribution.
An execution role for the Lambda function (IAM role) – The Lambda service principals assume this role to run the function.
Service-linked roles for Lambda@Edge – The service-linked roles allow specific AWS services to replicate Lambda functions across AWS Regions and allow CloudWatch to use CloudFront log files.

IAM permissions required to associate Lambda@Edge functions with CloudFront distributions

In addition to the IAM permissions that you need for Lambda, you need the following permissions to associate Lambda functions with CloudFront distributions:

lambda:GetFunction – Grants permission to retrieve configuration information for the Lambda function, and a presigned URL to download a .zip file that contains the function.
lambda:EnableReplication* – Grants permission to the resource policy so that the Lambda replication service can get the function's code and configuration.
lambda:DisableReplication* – Grants permission to the resource policy so that the Lambda replication service can delete the function.

Important: You must add the asterisk (*) at the end of the lambda:EnableReplication* and lambda:DisableReplication* actions.

For the resource, specify the ARN of the function version that you want to execute when a CloudFront event occurs, as in the following example:

arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2

iam:CreateServiceLinkedRole – Grants permission to create the service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. After you configure Lambda@Edge for the first time, the service-linked role is created for you automatically. You don't need to add this permission to other distributions that use Lambda@Edge.

cloudfront:UpdateDistribution or cloudfront:CreateDistribution – Grants permission to update or create a distribution.

For more information, see the following topics: Identity and Access Management for Amazon CloudFront · Permissions for Lambda resources in the AWS Lambda Developer Guide

Function execution role for service principals

You must create an IAM role that the lambda.amazonaws.com and edgelambda.amazonaws.com service principals can assume when they execute your function.

Tip: When you create your function in the Lambda console, you can choose to create a new execution role from an AWS policy template. This step automatically adds the Lambda@Edge permissions required to run your function. See step 5 in Tutorial: Creating a simple Lambda@Edge function.
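Pulled together, the association permissions above could appear in a single identity-based policy along these lines. This is a sketch only: the account ID, function name, and version come from the example ARN, and the broad Resource in the second statement is for illustration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:EnableReplication*",
        "lambda:DisableReplication*"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "cloudfront:UpdateDistribution",
        "cloudfront:CreateDistribution"
      ],
      "Resource": "*"
    }
  ]
}
```

In practice you would scope the second statement's Resource down to the distributions you manage.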
For more information about creating an IAM role manually, see Creating roles and attaching policies (console) in the IAM User Guide.

Example: Role trust policy

You can add this role on the Trust relationships tab in the IAM console. Do not add this policy on the Permissions tab.

JSON
{ "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }

For more information about the permissions that you must grant to the execution role, see Permissions for Lambda resources in the AWS Lambda Developer Guide.

Notes:
By default, data is written to CloudWatch Logs whenever a CloudFront event triggers a Lambda function. If you want to use these logs, the execution role needs permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole to grant the permission to the execution role. For more information about CloudWatch Logs, see Edge function logs.
If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that action.

Service-linked roles for Lambda@Edge

Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf.

Lambda@Edge uses the following IAM service-linked roles:

AWSServiceRoleForLambdaReplicator – Lambda@Edge uses this role to allow Lambda@Edge to replicate functions to AWS Regions. When you first add a Lambda@Edge trigger in CloudFront, a role named AWSServiceRoleForLambdaReplicator is created automatically so that Lambda@Edge can replicate functions to AWS Regions. This role is required to use Lambda@Edge functions. The ARN for the AWSServiceRoleForLambdaReplicator role looks like the following example:
arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator

AWSServiceRoleForCloudFrontLogger – CloudFront uses this role to push log files to CloudWatch. You can use the log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association, so that CloudFront can push Lambda@Edge error log files to CloudWatch. The ARN for the AWSServiceRoleForCloudFrontLogger role looks like this:
arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger

A service-linked role makes setting up and using Lambda@Edge easier because you don't have to add the necessary permissions manually. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume the roles. The defined permissions include the trust policy and the permissions policy. You cannot attach the permissions policy to any other IAM entity.

You must remove any associated CloudFront or Lambda@Edge resources before you can delete a service-linked role. This helps protect your Lambda@Edge resources by preventing you from removing a service-linked role that is still required to access active resources.

For more information about service-linked roles, see Service-linked roles for CloudFront.

Service-linked role permissions for Lambda@Edge

Lambda@Edge uses two service-linked roles, named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each of these roles.

Contents: Service-linked role permissions for Lambda Replicator · Service-linked role permissions for the CloudFront logger

Service-linked role permissions for Lambda Replicator

This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role.

The role permissions policy allows Lambda@Edge to perform the following actions on the specified resources:
lambda:CreateFunction on arn:aws:lambda:*:*:function:*
lambda:DeleteFunction on arn:aws:lambda:*:*:function:*
lambda:DisableReplication on arn:aws:lambda:*:*:function:*
iam:PassRole on all AWS resources
cloudfront:ListDistributionsByLambdaFunction on all AWS resources

Service-linked role permissions for the CloudFront logger

This service-linked role allows CloudFront to push log files to CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role.

The role permissions policy allows Lambda@Edge to perform the following actions on the specified arn:aws:logs:*:*:log-group:/aws/cloudfront/* resource:
logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents

You must configure permissions so that an IAM entity (such as a user, group, or role) can delete the Lambda@Edge service-linked roles. For more information, see Service-linked role permissions in the IAM User Guide.

Creating service-linked roles for Lambda@Edge

You don't usually create the service-linked roles for Lambda@Edge manually. The service creates the roles for you automatically in the following scenarios:

When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role (if it doesn't already exist). This role allows Lambda to replicate Lambda@Edge functions to AWS Regions. If you delete this service-linked role, the role is created again when you add a new trigger for Lambda@Edge in a distribution.

When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role (if it doesn't already exist). This role allows CloudFront to push your log files to CloudWatch. If you delete this service-linked role, the role is created again when you update or create a CloudFront distribution that has a Lambda@Edge association.

To create these service-linked roles manually, you can run the following AWS Command Line Interface (AWS CLI) commands:

To create the AWSServiceRoleForLambdaReplicator role, run the following command.
aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com

To create the AWSServiceRoleForCloudFrontLogger role, run the following command.

aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com

Editing Lambda@Edge service-linked roles

Lambda@Edge does not allow you to edit the AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger service-linked roles. After the service creates a service-linked role, the role name cannot be changed because various entities might reference the role. However, you can edit the description of the role using IAM. For more information, see Editing a service-linked role in the IAM User Guide.

Supported AWS Regions for Lambda@Edge service-linked roles

CloudFront supports using service-linked roles for Lambda@Edge in the following AWS Regions:
US East (N. Virginia) – us-east-1
US East (Ohio) – us-east-2
US West (N. California) – us-west-1
US West (Oregon) – us-west-2
Asia Pacific (Mumbai) – ap-south-1
Asia Pacific (Seoul) – ap-northeast-2
Asia Pacific (Singapore) – ap-southeast-1
Asia Pacific (Sydney) – ap-southeast-2
Asia Pacific (Tokyo) – ap-northeast-1
Europe (Frankfurt) – eu-central-1
Europe (Ireland) – eu-west-1
Europe (London) – eu-west-2
South America (São Paulo) – sa-east-1 | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/manual/extending.html | Extending :: Apache Log4j

Extending

Log4j provides numerous extension points to adapt it for custom needs. Several of these extension points are covered on the page of the associated component:

Log4j API: Extending levels · Extending markers · Extending messages · Extending thread context
Log4j Core: Extending appenders · Extending filters · Extending layouts · Extending JSON Template Layout · Extending Pattern Layout · Extending lookups

This section guides you through the rest of the Log4j extension points.

Extension mechanisms

Log4j allows extensions primarily using the following mechanisms:

Plugins

The Log4j plugin system is the de facto extension mechanism embraced by various Log4j components.
Plugins provide extension points to components that can be used to implement new features without modifying the original component. It is analogous to a dependency injection framework, but curated for Log4j-specific needs.

In a nutshell, you annotate your classes with @Plugin and their (static) factory methods with @PluginFactory. Last, you inform the Log4j plugin system to discover these custom classes. This is done by running the PluginProcessor annotation processor while building your project. Refer to Plugins for details.

ServiceLoaders

ServiceLoader is a simple service-provider loading facility baked into the Java platform itself. Log4j uses ServiceLoaders in places where:

The service needs to be implementation agnostic. As a result, the Log4j plugin system cannot be used, since it is provided by the logging implementation, i.e., Log4j Core. For instance, this is why extending Thread Context, which is a Log4j API component, works using ServiceLoaders.

The service needs to be loaded before the Log4j plugin system. For instance, this is why extending Provider works using ServiceLoaders.

Refer to the ServiceLoader documentation for details.

System properties

Log4j uses system properties to determine the fully-qualified class name (FQCN) to load for extending certain functionality. For instance, extending MessageFactory2 works using system properties. Note that loading a class using only its FQCN can result in unexpected behaviour when there are multiple class loaders.

Extension points

In this section we guide you through certain Log4j extension points that are not covered elsewhere.

Provider

Provider is the anchor contract binding the Log4j API to an implementation. For instance, it has been implemented by the Log4j Core, Log4j-to-JUL bridge, and Log4j-to-SLF4J bridge modules. Under the hood, LogManager locates a Provider implementation using the ServiceLoader mechanism and delegates invocations to it.
Hence, you can extend it by providing an org.apache.logging.log4j.spi.Provider implementation as a ServiceLoader service. Having multiple Providers on the classpath is strongly discouraged. Yet when this happens, you can use the log4j2.provider property to explicitly select one.

LoggerContextFactory

LoggerContextFactory is the factory class used by Log4j API implementations to create LoggerContexts. If you are using Log4j Core, you can use ContextSelectors to influence the way its LoggerContextFactory implementation works. If you are creating a new Log4j API implementation, you should provide a custom Provider to introduce your custom LoggerContextFactory implementation.

ContextSelector

Log4jContextFactory, the Log4j Core implementation of LoggerContextFactory, delegates the actual work to a ContextSelector. It can be configured using the log4j2.contextSelector property.

ConfigurationFactory

ConfigurationFactory is the factory class used by Log4j Core to create Configuration instances given a LoggerContext and a ConfigurationSource. You can provide a custom ConfigurationFactory in the form of a plugin. For example, see XmlConfigurationFactory.java and XmlConfiguration.java of Log4j Core. You can use the log4j2.configurationFactory property to explicitly set a ConfigurationFactory to be used before any other factory implementation.

LoggerConfig

LoggerConfig denotes the Logger configurations in a Configuration. A custom LoggerConfig needs to satisfy the following conditions:

It needs to extend the LoggerConfig class.
It needs to be declared as a plugin.
Its plugin category should be set to Node.CATEGORY.

For example, see the RootLogger definition in LoggerConfig.java.

LogEventFactory

Log4j Core uses LogEventFactory to create LogEvents. You can replace the default LogEventFactory implementation with a custom one of your own by using the log4j2.logEventFactory property. Note that asynchronous loggers discard the LogEventFactory and any configuration related to it.
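Several of the properties above (log4j2.logEventFactory, log4j2.contextSelector, and so on) share the same shape: they name a fully-qualified class that is instantiated reflectively. The following is a simplified, self-contained illustration of that pattern — not Log4j's actual loading code — with java.lang.StringBuilder standing in for a real factory class:

```java
public class FqcnLoader {
    /**
     * Instantiates the class named by the given system property, falling back
     * to defaultFqcn when the property is unset. Mirrors (in spirit) how
     * properties such as log4j2.logEventFactory select an implementation.
     */
    static Object load(String propertyName, String defaultFqcn) throws Exception {
        String fqcn = System.getProperty(propertyName, defaultFqcn);
        // Note: resolving a class by FQCN alone can misbehave when multiple
        // class loaders are involved, as the manual warns.
        return Class.forName(fqcn).getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        // With the property unset, the default class is instantiated.
        Object factory = load("log4j2.logEventFactory", "java.lang.StringBuilder");
        System.out.println(factory.getClass().getName()); // prints: java.lang.StringBuilder
    }
}
```

A real implementation would also validate that the loaded class implements the expected interface before casting.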
MessageFactory2

Log4j Core uses MessageFactory2 to create Messages. You can replace the default MessageFactory2 implementation with a custom one of your own by using the log4j2.messageFactory property. In the case of Flow Tracing, Log4j Core uses FlowMessageFactory; you can replace the default FlowMessageFactory implementation with a custom one of your own by using the log4j2.flowMessageFactory property.

Message factory implementations are expected to interpret formatting patterns containing placeholders denoted with {}. For instance, the default message factory chooses between a SimpleMessage and a ParameterizedMessage depending on the presence of placeholders in the formatting pattern.

If you want to change the placeholder style (e.g., switching from {} to %s), you should not replace the default message factory, because this would break existing Log4j API calls that use the standard placeholder style. Instead, you can use the LogManager methods that accept a message factory to create Loggers with your custom message factory implementation. | 2026-01-13T09:30:34
https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html | Lambda@Edge 用の IAM アクセス許可とロールのセットアップ - Amazon CloudFront Lambda@Edge 用の IAM アクセス許可とロールのセットアップ - Amazon CloudFront ドキュメント Amazon CloudFront デベロッパーガイド Lambda@Edge 関数を CloudFront ディストリビューションに関連付けるために必要な IAM アクセス許可 サービスプリンシパルの関数実行ロール Lambda@Edge 用のサービスにリンクされたロール Lambda@Edge 用の IAM アクセス許可とロールのセットアップ Lambda@Edge を設定するには、AWS Lambda に対する以下の IAM アクセス許可およびロールが必要です。 IAM アクセス許可 – これらのアクセス許可により、Lambda 関数を作成して CloudFront ディストリビューションに関連付けることができます。 Lambda 関数実行ロール (IAM ロール) – Lambda サービスプリンシパルは、このロールを引き受けて関数を実行します。 Lambda@Edge のサービスリンクロール – サービスリンクロールにより、特定の AWS のサービス が Lambda 関数を AWS リージョン にレプリケートし、CloudWatch が CloudFront ログファイルを使用できるようになります。 Lambda@Edge 関数を CloudFront ディストリビューションに関連付けるために必要な IAM アクセス許可 Lambda に必要な IAM アクセス許可に加え、ユーザーは、Lambda 関数を CloudFront ディストリビューションに関連付けるための以下の IAM アクセス許可が必要です。 lambda:GetFunction – Lambda 関数の設定情報を取得するためのアクセス許可、およびその関数を含む .zip ファイルをダウンロードするための署名付き URL を取得するアクセス許可を付与します。 lambda:EnableReplication* – Lambda レプリケーションサービスが関数コードと設定を取得するためのアクセス許可をリソースポリシーに付与します。 lambda:DisableReplication* – Lambda レプリケーションサービスが関数を削除するためのアクセス許可をリソースポリシーに付与します。 重要 lambda:EnableReplication * および lambda:DisableReplication * アクションの最後にアスタリスク ( * ) を追加する必要があります。 リソースに対して、次の例のように、CloudFront イベントが発生した場合に実行する関数バージョンの ARN を指定します。 arn:aws:lambda:us-east-1:123456789012:function: TestFunction :2 iam:CreateServiceLinkedRole – Lambda@Edge が CloudFront で Lambda 関数をレプリケートするために使用するサービスリンクロールを作成するアクセス許可を付与します。Lambda@Edge を初めて設定すると、サービスリンクロールが自動的に作成されます。Lambda@Edge を使用する他のディストリビューションにこのアクセス許可を追加する必要はありません。 cloudfront:UpdateDistribution または cloudfront:CreateDistribution – ディストリビューションを更新または作成するアクセス許可を付与します。 詳細については、以下の各トピックを参照してください。 Amazon CloudFront のアイデンティティとアクセス管理 「 AWS Lambda デベロッパーガイド 」の「 Lambda リソースのアクセス許可 」 サービスプリンシパルの関数実行ロール ユーザーの関数を実行するときに lambda.amazonaws.com と edgelambda.amazonaws.com サービスプリンシパル が引き受けることができる IAM ロールを作成する必要があります。 ヒント Lambda コンソールで関数を作成する場合、AWS 
ポリシーテンプレートを使用して新しい実行ロールを作成することを選択できます。このステップでは、関数を実行するために必要な Lambda@Edge アクセス許可が 自動的に 追加されます。 チュートリアル: シンプルな Lambda@Edge 関数の作成のステップ 5 を参照してください。 IAM ロールを手動で作成する詳細については、「 IAM ユーザーガイド 」の「 ロールの作成とポリシーのアタッチ (コンソール) 」を参照してください。 例: ロール信頼ポリシー IAM コンソールの [信頼関係] タブで、このロールを追加できます。このポリシーは [アクセス許可] タブには追加しないでください。 JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } 実行ロールに付与する必要がある許可の詳細については、「 AWS Lambda デベロッパーガイド 」の「 Lambda リソースのアクセス許可 」を参照してください。 メモ デフォルトでは、CloudFront イベントが Lambda 関数をトリガーするたびに、データが CloudWatch Logs に書き込まれます。これらのログを使用する場合は、CloudWatch Logs にデータを書き込むためのアクセス権限が実行ロールに必要です。事前定義された AWSLambdaBasicExecutionRole を使用して、実行ロールにアクセス許可を付与できます。 CloudWatch Logs の詳細については、「 エッジ関数のログ 」を参照してください。 S3 バケットからのオブジェクトの読み取りなど、Lambda 関数コードが他の AWS リソースにアクセスする場合、そのアクションを実行するためのアクセス許可が実行ロールに必要です。 Lambda@Edge 用のサービスにリンクされたロール Lambda@Edge は IAM サービスリンクロール を使用します。サービスにリンクされたロールは、サービスに直接リンクされた一意のタイプの IAM ロールです。サービスにリンクされたロールは、サービスによって事前定義されており、お客様の代わりにサービスから他の AWS サービスを呼び出す必要のあるアクセス許可がすべて含まれています。 Lambda@Edge は、以下の IAM サービスリンクロールを使用します。 AWSServiceRoleForLambdaReplicator - Lambda@Edge はこのロールを使用して、Lambda@Edge が関数を AWS リージョン にレプリケートできるようにします。 CloudFront で Lambda@Edge トリガーを初めて追加すると、AWSServiceRoleForLambdaReplicator という名前のロールが自動的に作成され、Lambda@Edge が関数を AWS リージョン にレプリケートできるようになります。このロールは、Lambda@Edge 関数を使用するために必要です。AWSServiceRoleForLambdaReplicator ロールの ARN は次の例のようになります。 arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator AWSServiceRoleForCloudFrontLogger – CloudFront はこのロールを使用してログファイルを CloudWatch にプッシュします。ログファイルを使用して Lambda@Edge 検証エラーをデバッグできます。 Lambda@Edge 関数の関連付けを追加すると、AWSServiceRoleForCloudFrontLogger ロールが自動的に作成され、CloudFront が Lambda@Edge エラーログファイルを CloudWatch にプッシュできるようになります。AWSServiceRoleForCloudFrontLogger の ARN は次のようになります。 
arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger サービスリンクロールを使用することで、必要なアクセス許可を手動で追加する必要がなくなるため、Lambda@Edge のセットアップと使用が簡単になります。Lambda@Edge はそのサービスリンクロールのアクセス許可を定義し、Lambda@Edge のみがそのロールを引き受けることができます。定義されたアクセス権限には、信頼ポリシーとアクセス権限ポリシーが含まれます。その他の IAM エンティティにアクセス許可ポリシーをアタッチすることはできません。 サービスにリンクされたロールを削除するには、その前に、それらのロールに関連付けられている CloudFront または Lambda@Edge のリソースを削除する必要があります。このようにして、アクティブなリソースにアクセスするためにまだ必要な、サービスリンクロールを削除しないようにすることで、Lambda@Edge リソースが保護されます。 サービスにリンクされたロールの詳細については、「 CloudFront のサービスにリンクされたロール 」を参照してください。 Lambda@Edge 用のサービスにリンクされたロールのアクセス許可 Lambda@Edge は、 AWSServiceRoleForLambdaReplicator および AWSServiceRoleForCloudFrontLogger という名前の 2 つのサービスにリンクされたロールを使用します。以下のセクションでは、それらの各ロールのアクセス許可を管理する方法について説明します。 目次 Lambda Replicator 用のサービスにリンクされたロールのアクセス許可 CloudFront ロガー用のサービスにリンクされたロールのアクセス許可 Lambda Replicator 用のサービスにリンクされたロールのアクセス許可 このサービスにリンクされたロールにより、Lambda が Lambda@Edge 関数を AWS リージョン にレプリケートできるようになります。 AWSServiceRoleForLambdaReplicator サービスにリンクされたロールは、ロールを継承するために replicator.lambda.amazonaws.com のサービスを信頼します。 このロールのアクセス権限ポリシーは、Lambda@Edge が以下のアクションを指定されたリソースに対して実行することを許可します。 lambda:CreateFunction の。 arn:aws:lambda:*:*:function:* lambda:DeleteFunction の。 arn:aws:lambda:*:*:function:* lambda:DisableReplication の。 arn:aws:lambda:*:*:function:* iam:PassRole の。 all AWS resources cloudfront:ListDistributionsByLambdaFunction の。 all AWS resources CloudFront ロガー用のサービスにリンクされたロールのアクセス許可 このサービスリンクロールでは、Lambda@Edge の検証エラーをデバッグするのに役立つように CloudFront が CloudWatch にログファイルをプッシュすることが許可されます。 AWSServiceRoleForCloudFrontLogger サービスにリンクされたロールは、ロールを継承するために logger.cloudfront.amazonaws.com のサービスを信頼します。 このロールのアクセス権限ポリシーは、Lambda@Edge が以下のアクションを指定された arn:aws:logs:*:*:log-group:/aws/cloudfront/* リソースに対して実行することを許可します。 logs:CreateLogGroup logs:CreateLogStream logs:PutLogEvents IAM エンティティ (ユーザー、グループ、ロールなど) で Lambda@Edge のサービスにリンクされたロールを削除できるように、アクセス許可を設定する必要があります。詳細については IAM ユーザーガイド の「 サービスにリンクされた役割のアクセス許可 」を参照してください。 Lambda@Edge 
用のサービスにリンクされたロールの作成 通常、Lambda@Edge のサービスにリンクされたロールを手動で作成することはありません。以下のシナリオで、サービスによってロールが自動的に作成されます。 トリガーを初めて作成するとき、サービスは AWSServiceRoleForLambdaReplicator ロールを作成します (まだ存在しない場合)。このロールにより、Lambda が Lambda@Edge 関数を AWS リージョン にレプリケートできるようになります。 このサービスにリンクされたロールを削除した場合、Lambda@Edge の新しいトリガーをディストリビューションに追加すると、そのロールは再び作成されます。 Lambda@Edge が関連付けられた CloudFront ディストリビューションを更新または作成すると、サービスによって AWSServiceRoleForCloudFrontLogger ロールが作成されます (まだ存在しない場合)。このロールにより、CloudFront が CloudWatch にログファイルをプッシュできるようになります。 このサービスリンクロールを削除した場合は、Lambda@Edge の関連付けがある CloudFront ディストリビューションを更新または作成すると、そのロールが再び作成されます。 これらのサービスリンクロールを手動で作成する必要がある場合は、次の AWS Command Line Interface (AWS CLI) コマンドを実行します。 AWSServiceRoleForLambdaReplicator ロールを作成するには 以下のコマンドを実行してください。 aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com AWSServiceRoleForCloudFrontLogger ロールを作成するには 以下のコマンドを実行してください。 aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com Lambda@Edge のサービスにリンクされたロールの編集 Lambda@Edge のサービスリンクロール AWSServiceRoleForLambdaReplicator または AWSServiceRoleForCloudFrontLogger を編集することはできません。サービスによってサービスリンクロールが作成された後は、多くのエンティティでそのロールが参照されるため、そのロール名は変更できません。ただし、IAM を使用してロールの説明を編集することはできます。詳細については、「 IAM ユーザーガイド 」の「 サービスリンクロールの編集 」を参照してください。 Lambda@Edge サービスリンクロールでサポートされている AWS リージョン CloudFront は、次の AWS リージョン で Lambda@Edge 用のサービスにリンクされたロールの使用をサポートしています。 米国東部 (バージニア北部) – us-east-1 米国東部 (オハイオ) – us-east-2 米国西部 (北カリフォルニア) – us-west-1 米国西部 (オレゴン) – us-west-2 アジアパシフィック (ムンバイ) – ap-south-1 アジアパシフィック (ソウル) – ap-northeast-2 アジアパシフィック (シンガポール) – ap-southeast-1 アジアパシフィック (シドニー) – ap-southeast-2 アジアパシフィック (東京) – ap-northeast-1 欧州 (フランクフルト) – eu-central-1 欧州 (アイルランド) – eu-west-1 欧州 (ロンドン) – eu-west-2 南米 (サンパウロ) – sa-east-1 ブラウザで JavaScript が無効になっているか、使用できません。 AWS ドキュメントを使用するには、JavaScript を有効にする必要があります。手順については、使用するブラウザのヘルプページを参照してください。 ドキュメントの表記規則 チュートリアル: 基本的な Lambda@Edge 関数 Lambda@Edge 関数を記述および作成する このページは役に立ちましたか? 
LLVM Weekly - #568, November 18th 2024

Welcome to the five hundred and sixty-eighth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org.

News and articles from around the web and events

If you’re a Mastodon person then hopefully you’re already following @llvmweekly@fosstodon.org. LLVM Weekly is now also on Bluesky and I don’t intend to update Twitter going forwards.

MaskRay blogged about removing global state from LLD.

Neil Henning wrote up work to add custom runtime togglable asserts in an otherwise unmodified LLVM codebase.

Stefanos Baziotis wrote a high level overview of some common compiler optimisations.

The MLIR and LLVM related activities at SC'24 are summarised here.

According to the LLVM calendar, in the coming week there will be the following:
- Office hours with the following hosts: Phoebe Wang, Johannes Doerfert.
- Online sync-ups on the following topics: pointer authentication, vectorizer improvements, security group, new contributors, Clang C/C++ language working group, Flang, floating point, SPIR-V, RISC-V, LLVM libc, MLIR.
For more details see the LLVM calendar, and the getting involved documentation on online sync-ups and office hours.

On the forums

David Spickett shared news of the major usability improvement now implemented through better test result reporting of Buildkite pre-commit checks. Thank you David!

James Y Knight kicked off a discussion on C calling convention lowering to LLVM IR.
Tobias Hieta shared an update on the 19.1.4 release, which is now planned for 19th November.

Alex Bradbury shared a PSA on recent improvements to llvm-zorg buildbot configuration testing. Specifically, there’s now a documented and working flow that allows you to test a builder locally before submitting to llvm-zorg, and llvm-zorg now has precommit checks via GitHub Actions using this.

Daniel Thornburgh discussed code size optimisation for printf in LLVM’s libc, and Simon Tatham contributed a summary of how the Arm toolchain managed printf optimisations.

Marek Sedláček started a discussion on IR2Builder, a converter from LLVM IR to equivalent C++ IRBuilder calls. As noted by some of the replies, LLVM had something along these lines a long time ago - the “C++ backend”.

Justin Bogner suggested LLVM_EXPERIMENTAL_TARGETS_TO_BUILD isn’t helpful and LLVM should have one class of target, detailing some of the issues. It’s early stages for the discussion, but there are some opposing views being presented currently.

Félix Cloutier started an RFC thread on introducing __attribute__((format_like)) with the goal of making -Wformat-nonliteral more useful.

Raghesh Aloor proposed the vector data dependence graph visualisation tool for inclusion in LLVM.

Renato Golin posted an RFC to move tensor.pack and tensor.unpack into the linalg MLIR dialect.

Alexander Richardson suggested supporting fine-grained non-integral pointer properties.

Max would like to extend LLD’s MachO linker’s balanced partitioning feature to ELF LLD.

Rahul Joshi ponders whether -*- C++ -*- is still useful in header files to help text editors determine the header is C++ rather than C.

Markus Böck started an MLIR RFC discussion on making LLVMStructType immutable.

LLVM commits

A global function merging pass was implemented. d23c5c2.

The new IRNormalizer pass transforms LLVM modules into a normalised form by reordering and renaming instructions while preserving the same semantics. 2e9f869.
Guidance on merging locations in debuginfo was updated. 6d23ac1.

LLVM learned to emit a prologue_end in as suitable a place as it can manage for pathological inputs. b468ed4.

Profile data is no longer used to flip branch conditions when using optsize, or to direct MachineSink. b8d6659, 57c33ac.

The CodeExtractor saw a large refactoring. f6795e6.

There is now a working and documented flow for testing a new buildbot builder configuration locally (e.g. if adding a new builder, or modifying an existing one). 8da61a3.

lit gained a --report-failures-only option to only list failures in the XUnit XML test report. c63e83f.

The DXILFlattenArrays pass was added to flatten arrays for the DirectX backend. 5ac624c8.

The new -emit-func-debug-line-table-offsets option can be used to enable per-function line table offsets and end sequences in DWARF. This allows tools to attribute line number information to their corresponding functions even if functions are merged. f407dff.

llvm.experimental.vector.match and llvm.experimental.vector.extract.last.active intrinsics were added. e52238b, ed5aadd.

The high bits of AArch64 FPR and GPR registers are now defined, as a step towards enabling subregister liveness tracking. c1c68ba.

A llvm.experimental.memset_pattern intrinsic was added. 298127d.

Clang commits

[[clang::lifetime_capture_by(X)]] is now supported, which can be used to specify when a reference to a function parameter is captured by ‘X’. 8c4331c.

Support was introduced for diagnostics suppression mapping files, e.g. suppressing warnings for certain headers. 41e3919.

The webkit.UncountedLambdaCapturesChecker learned to ignore trivial functions and those passed with [[clang::noescape]], lowering the false positive rate. 2c6424e.

clangd’s ModulesBuilder was rearchitected to improve performance. e385e0d.

Other project commits

Work to remove global state from the LLD ELF linker was completed. 73bb022.

compiler-rt gained a libcall for fp128 to bf16 conversion. 28e4aad.
LLDB can now be built against either Lua 5.3 or Lua 5.4 (rather than just 5.3). e19d740.

MLIR’s Python bindings now allow converting boolean numpy arrays to and from MLIR attributes. 1824e45.

LLVM Offload now has minimal support for riscv64 in its host plugin. b6bd747.

Subscribe at LLVMWeekly.org.
Test and debug Lambda@Edge functions - Amazon CloudFront Developer Guide

It's important to test your Lambda@Edge function code standalone, to make sure that it completes the intended task, and to do integration testing, to make sure that the function works correctly with CloudFront. During integration testing, or after your function has been deployed, you might need to debug CloudFront errors, such as HTTP 5xx errors. Errors can be an invalid response returned from the Lambda function, execution errors when the function is triggered, or errors caused by throttling of executions by the Lambda service. The sections in this topic share strategies for determining which type of failure is the issue, and then the steps you can take to fix the problem.

Note: When you review CloudWatch log files or metrics while you're troubleshooting errors, be aware that they are displayed or stored in the AWS Region closest to the location where the function executed.
So, for example, if you have a website or web application with users in the United Kingdom, and you have a Lambda function associated with your distribution, you must change the Region to view the CloudWatch metrics or log files for the London AWS Region. For more information, see Determine the Lambda@Edge Region.

Topics
- Test your Lambda@Edge function
- Identify Lambda@Edge function errors in CloudFront
- Troubleshoot invalid Lambda@Edge function responses (validation errors)
- Troubleshoot Lambda@Edge function execution errors
- Determine the Lambda@Edge Region
- Determine whether your account pushes logs to CloudWatch

Test your Lambda@Edge function

There are two steps to testing your Lambda function: standalone testing and integration testing.

Test standalone functionality

Before you add your Lambda function to CloudFront, make sure to test the functionality first by using the testing capabilities in the Lambda console or by using other methods. For more information about testing in the Lambda console, see Invoke a Lambda function using the console in the AWS Lambda Developer Guide.

Test your function's operation in CloudFront

It's important to complete integration testing, where your function is associated with a distribution and runs based on a CloudFront event. Make sure that the function is triggered for the right event, and returns a response that is valid and correct for CloudFront. For example, make sure that the event structure is correct, that only valid headers are included, and so on.

As you iterate on integration testing with your function in the Lambda console, refer to the steps in the Lambda@Edge tutorial as you modify your code or change the CloudFront trigger that invokes your function. For example, make sure that you're working in a numbered version of your function, as described in this tutorial step: Step 4: Add a CloudFront trigger to run the function.
As you make changes and deploy them, be aware that it takes several minutes for your updated function and CloudFront triggers to replicate across all Regions. This typically takes a few minutes, but can take up to 15 minutes. You can check whether replication is finished by opening the CloudFront console and viewing your distribution.

To check whether replication has finished:
1. Open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home.
2. Choose the name of the distribution.
3. Check for the distribution status to change from In Progress back to Deployed, which means that your function has been replicated. Then follow the steps in the next section to verify that the function works.

Be aware that testing in the console only validates the logic of your function, and doesn't apply any service quotas (formerly known as limits) that are specific to Lambda@Edge.
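The console check above can also be scripted. A minimal sketch, assuming a boto3-style CloudFront client is passed in (GetDistribution reports the distribution's status as 'InProgress' or 'Deployed'; the helper name and polling parameters are our own):

```python
import time

def wait_until_deployed(cloudfront_client, distribution_id,
                        delay_seconds=30, max_attempts=30):
    """Poll a distribution until its status returns to 'Deployed'.

    Mirrors the console steps described above: replication of an updated
    function is finished once the distribution status changes from
    'InProgress' back to 'Deployed'. Returns True once deployed, or
    False if we gave up after max_attempts polls.
    """
    for _ in range(max_attempts):
        resp = cloudfront_client.get_distribution(Id=distribution_id)
        if resp["Distribution"]["Status"] == "Deployed":
            return True
        time.sleep(delay_seconds)
    return False
```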
Troubleshoot error response status codes in CloudFront.

What causes Lambda@Edge function errors in CloudFront

There are several reasons why a Lambda function might cause an HTTP 5xx error, and the troubleshooting steps you take depend on the type of error. Errors can be categorized as follows:

A Lambda function execution error. An execution error occurs when CloudFront doesn't get a response from Lambda because there's an unhandled exception in the function or there's an error in the code. For example, if the code includes callback(Error).

An invalid Lambda function response is returned to CloudFront. After the function runs, CloudFront receives a response from Lambda. An error is returned if the structure of the response object doesn't conform to the Lambda@Edge event structure, or the response contains invalid headers or other invalid fields.

Execution in CloudFront is throttled because of Lambda service quotas (formerly known as limits). The Lambda service throttles executions in each Region, and returns an error if you exceed the quota. For more information, see Quotas on Lambda@Edge.

How to determine the type of failure

To help you decide where to focus as you debug and work to resolve errors returned by CloudFront, it's helpful to identify why CloudFront is returning an HTTP error. To get started, you can use the graphs provided in the Monitoring section of the CloudFront console in the AWS Management Console. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitor CloudFront metrics with Amazon CloudWatch.

The following graphs can be especially helpful when you want to track down whether errors are returned by the origin or by a Lambda function, and to narrow down the type of issue when it's an error from a Lambda function.
Error rate graph

One of the graphs that you can view on the Overview for each of your distributions is an Error rate graph. This graph displays the rate of errors as a percentage of the total requests coming to your distribution. The graph shows the total error rate, total 4xx errors, total 5xx errors, and total 5xx errors from Lambda functions.

Based on the type and volume of errors, you can take steps to investigate and troubleshoot the cause. If you see Lambda errors, you can investigate further by looking at the specific types of errors that the function returns. The Lambda@Edge errors tab includes graphs that categorize function errors by type, to help you pinpoint the issue for a specific function. If you see CloudFront errors, you can troubleshoot and work to fix origin errors or change your CloudFront configuration. For more information, see Troubleshoot error response status codes in CloudFront.

Execution error and invalid function response graphs

The Lambda@Edge errors tab includes graphs that categorize Lambda@Edge errors for a specific distribution, by type. For example, one graph shows all execution errors by AWS Region. To make it easier to troubleshoot issues, you can look for specific problems by opening and examining the log files for specific functions, by Region.

To view the log files for a specific function by Region:
1. On the Lambda@Edge errors tab, under Associated Lambda@Edge functions, choose a function name, and then choose View metrics.
2. Next, on the page with your function's name, in the top-right corner, choose View function logs, and then choose a Region. For example, if you see issues in the Errors graph for the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console.
3. In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events for the function.

In addition, read through the following sections in this chapter for more recommendations about troubleshooting and fixing errors.

Throttles graph

The Lambda@Edge errors tab also includes a Throttles graph. Occasionally, the Lambda service throttles your function invocations on a per-Region basis, if you reach the Regional concurrency quota (formerly known as a limit). If you see an exceeded-quota error, your function has reached a quota that the Lambda service imposes on executions in a Region. For more information, including how to request a quota increase, see Quotas on Lambda@Edge.

For an example of how you can use this information to troubleshoot HTTP errors, see Four Steps for Debugging your Content Delivery on AWS.

Troubleshoot invalid Lambda@Edge function responses (validation errors)

If you've identified that your problem is a Lambda validation error, it means that your Lambda function is returning an invalid response to CloudFront. Follow the guidance in this section to take steps to review your function and make sure that your response conforms to CloudFront requirements.

CloudFront validates the response from a Lambda function in two ways:

The Lambda response must conform to the required object structure. Examples of bad object structure include the following: unparseable JSON, missing required fields, and an invalid object in the response. For more information, see the Lambda@Edge event structure.

The response must include only valid object values. An error occurs if the response includes a valid object but has values that aren't supported.
Examples include the following: adding or updating headers that are disallowed or read-only (see Restrictions on edge functions), exceeding the maximum allowed size (see Restrictions on the size of the generated response in the Lambda@Edge errors topic), and invalid characters or values (see the Lambda@Edge event structure).

When Lambda returns an invalid response to CloudFront, error messages are written to log files that CloudFront pushes to CloudWatch in the Region where the Lambda function executed. It's the default behavior to send the log files to CloudWatch when there's an invalid response. However, if you associated a Lambda function with CloudFront before the functionality was released, it might not be enabled for your function. For more information, see Determine whether your account pushes logs to CloudWatch, later in this topic.

CloudFront pushes log files to the Region corresponding to where your function executed, in the log group that's associated with your distribution. Log groups have the following format: /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where you can find the CloudWatch log files, see Determine the Lambda@Edge Region, later in this topic.

If the error is reproducible, you can create a new request that results in the error, and then find the request ID in the failed CloudFront response (the X-Amz-Cf-Id header) to locate a single failure in the log files. The log file entry includes information that can help you identify why the error is being returned, and also lists the corresponding Lambda request ID so that you can analyze the root cause in the context of a single request.

If an error is intermittent, you can use CloudFront access logs to find the request ID for a request that has failed, and then search the CloudWatch logs for the corresponding error messages.
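Once you have the X-Amz-Cf-Id value from a failed response, the log group naming convention above makes the search easy to script. A sketch, assuming a boto3-style CloudWatch Logs client (the helper name is ours; pagination is deliberately ignored for brevity):

```python
def find_failed_request(logs_client, distribution_id, request_id):
    """Search a distribution's Lambda@Edge log group for entries that
    mention a specific CloudFront request ID (the X-Amz-Cf-Id value).

    Uses the log group format described above:
    /aws/cloudfront/LambdaEdge/<DistributionId>. Run this against the
    Region where the function executed. Only the first page of results
    is returned; real code would follow nextToken for more.
    """
    log_group = "/aws/cloudfront/LambdaEdge/" + distribution_id
    resp = logs_client.filter_log_events(
        logGroupName=log_group,
        filterPattern='"%s"' % request_id,  # quoted term = substring match
    )
    return [event["message"] for event in resp.get("events", [])]
```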
For more information, see the earlier section, How to determine the type of failure.

Troubleshoot Lambda@Edge function execution errors

If the problem is a Lambda execution error, it can be helpful to create logging statements for your Lambda functions, to write messages to the CloudWatch log files that monitor the execution of your function in CloudFront and determine whether it's working as expected. Then you can search for those statements in the CloudWatch log files to verify that your function is working.

Note: Even if you haven't changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could return an execution error. For information about testing and migrating to a newer version, see Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environment.

Determine the Lambda@Edge Region

To see the Regions where your Lambda@Edge function is receiving traffic, view the metrics for the function in the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region. On the same page, you can choose a Region and view the log files for that Region so that you can investigate issues. You must review the CloudWatch log files in the correct AWS Region to see the log files that were created when CloudFront executed your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitor CloudFront metrics with Amazon CloudWatch.

Determine whether your account pushes logs to CloudWatch

By default, CloudFront enables logging of invalid Lambda function responses, and pushes the log files to CloudWatch by using one of the service-linked roles for Lambda@Edge.
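As a concrete example of the logging advice above, here is a minimal pass-through Lambda@Edge handler sketch in Python that writes trace messages (print output goes to the function's CloudWatch log stream in the Region where the replica executed). The event shape follows the Lambda@Edge event structure; the message format is our own:

```python
def handler(event, context):
    """Minimal pass-through Lambda@Edge handler with logging statements.

    For viewer-request and origin-request triggers, the incoming request
    lives at event['Records'][0]['cf']['request']; returning it unchanged
    lets CloudFront continue processing normally.
    """
    record = event["Records"][0]["cf"]
    request = record["request"]
    # print() output lands in the CloudWatch log stream for the Region
    # where this replica ran; search for this marker when debugging.
    print("eventType=%s uri=%s method=%s" % (
        record.get("config", {}).get("eventType", "unknown"),
        request.get("uri"),
        request.get("method"),
    ))
    return request
```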
If you have Lambda@Edge functions that you added to CloudFront before the invalid Lambda function response logging feature was released, logging is enabled when you update your Lambda@Edge configuration, for example, by adding a CloudFront trigger.

You can verify that pushing log files to CloudWatch is enabled for your account by doing the following:

Check to see whether the logs appear in CloudWatch. Make sure that you look in the Region where the Lambda@Edge function executed. For more information, see Determine the Lambda@Edge Region.

Determine whether the associated service-linked role exists in your account in IAM. You must have the AWSServiceRoleForCloudFrontLogger IAM role in your account. For more information about this role, see Service-linked roles for Lambda@Edge.
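The second check above (does the AWSServiceRoleForCloudFrontLogger role exist?) can be automated with the IAM GetRole API. A hedged sketch, assuming a boto3-style IAM client (the helper name is ours):

```python
def has_cloudfront_logger_role(iam_client):
    """Return True if the AWSServiceRoleForCloudFrontLogger service-linked
    role exists in the account, which indicates that CloudFront can push
    invalid-response log files to CloudWatch."""
    try:
        iam_client.get_role(RoleName="AWSServiceRoleForCloudFrontLogger")
        return True
    except Exception:
        # With a real boto3 client, this would specifically be
        # iam_client.exceptions.NoSuchEntityException.
        return False
```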
2.5.2. Global Configuration — Buildbot 4.3.0 documentation

The keys in this section affect the operations of the buildmaster globally.
Contents: Database Specification, MQ Specification, Multi-master mode, Site Definitions, Log Handling, Data Lifetime, Merging Build Requests, Prioritizing Builders, Prioritizing Workers, Configuring worker protocols, Defining Global Properties, Manhole, Metrics Options, Statistics Service, secretsProviders, BuildbotNetUsageData, Users Options, Input Validation, Revision Links, Codebase Generator.

2.5.2.1. Database Specification

Buildbot requires a connection to a database to maintain certain state information, such as tracking pending build requests. In the default configuration Buildbot uses a file-based SQLite database, stored in the state.sqlite file of the master’s base directory.

Important: SQLite3 is perfectly suitable for small setups with a few users. However, it does not scale well with large numbers of builders, workers and users. If you expect your Buildbot to grow over time, it is strongly advised to use a real database server (e.g., MySQL or Postgres). A SQLite3 database may be migrated to a real database server using the buildbot copy-db script. See the Using A Database Server section for more details.

Override this configuration with the db_url parameter. Buildbot accepts a database configuration in a dictionary named db. All keys are optional:

c['db'] = {
    'db_url': 'sqlite:///state.sqlite',
}

The db_url key indicates the database engine to use. The format of this parameter is completely documented at http://www.sqlalchemy.org/docs/dialects/, but is generally of the form:

"driver://[username:password@]host:port/database[?args]"

This parameter can be specified directly in the configuration dictionary, as c['db_url'], although this method is deprecated.

Buildbot also accepts a dictionary at c['db']['engine_kwargs']; this dictionary eventually ends up being passed to SQLAlchemy’s create_engine function, see https://docs.sqlalchemy.org/en/latest/core/engines.html#sqlalchemy.create_engine for details.
As an example, one may configure the number of connections opened to PostgreSQL like so:

c['db'] = {
    'db_url': "postgresql://username:password@hostname/dbname",
    'engine_kwargs': {
        'pool_size': 512,
        'max_overflow': 0,
    },
}

The following sections give additional information for particular database backends:

SQLite

For sqlite databases, since there is no host and port, relative paths are specified with sqlite:/// and absolute paths with sqlite:////. For example:

c['db_url'] = "sqlite:///state.sqlite"

SQLite requires no special configuration.

MySQL

c['db_url'] = "mysql://username:password@example.com/database_name?max_idle=300"

The max_idle argument for MySQL connections is unique to Buildbot and should be set to something less than the wait_timeout configured for your server. This controls the SQLAlchemy pool_recycle parameter, which defaults to no timeout. Setting this parameter ensures that connections are closed and re-opened after the configured amount of idle time. If you see errors such as _mysql_exceptions.OperationalError: (2006, 'MySQL server has gone away'), this means your max_idle setting is probably too high. show global variables like 'wait_timeout'; will show what the currently configured wait_timeout is on your MySQL server.

Buildbot requires use_unicode=True and charset=utf8, and will add them automatically, so they do not need to be specified in db_url.

MySQL defaults to the MyISAM storage engine, but this can be overridden with the storage_engine URL argument.

Postgres

c['db_url'] = "postgresql://username:password@hostname/dbname"

PostgreSQL requires no special configuration.
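The db_url format above ("driver://[username:password@]host:port/database[?args]") can be assembled programmatically. A small illustrative helper (our own, not part of Buildbot; Buildbot itself simply takes the finished string):

```python
def make_db_url(driver, database, username=None, password=None,
                host=None, port=None, **args):
    """Build a db_url of the form driver://[username:password@]host:port/database[?args].

    With no host (e.g. SQLite), this yields the three-slash relative-path
    form shown above: sqlite:///state.sqlite.
    """
    auth = "%s:%s@" % (username, password) if username else ""
    netloc = host or ""
    if host and port:
        netloc = "%s:%s" % (host, port)
    # Sort the query args so the output is deterministic.
    query = "?" + "&".join("%s=%s" % kv for kv in sorted(args.items())) if args else ""
    return "%s://%s%s/%s%s" % (driver, auth, netloc, database, query)
```

For example, make_db_url("mysql", "database_name", username="username", password="password", host="example.com", max_idle=300) reproduces the MySQL example string shown above.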
The type key describes the type of MQ implementation to be used. Note that the implementation type cannot be changed in a reconfig. The available implementation types are described in the following sections.

Simple

c['mq'] = {
    'type': 'simple',
    'debug': False,
}

This is the default MQ implementation. Similar to SQLite, it has no additional software dependencies, but does not support multi-master mode. Note that this implementation also does not support message persistence across a restart of the master. For example, if a change is received, but the master shuts down before the schedulers can create build requests for it, then those schedulers will not be notified of the change when the master starts again. The debug key, which defaults to False, can be used to enable logging of every message produced on this master.

Wamp

Note: At the moment, wamp is the only message queue implementation for multimaster. It was chosen because it is the only message queue with very solid support for Twisted. Other more common message queue systems, like RabbitMQ (using the AMQP protocol), do not have a convincing driver for Twisted; using them would require running on threads, which would add a significant performance overhead.

c['mq'] = {
    'type': 'wamp',
    'router_url': 'ws://localhost:8080/ws',
    'realm': 'realm1',
    # valid are: none, critical, error, warn, info, debug, trace
    'wamp_debug_level': 'error'
}

This is an MQ implementation using the wamp protocol. This implementation uses the Python Autobahn wamp client library, and is fully asynchronous (no use of threads). To use this implementation, you need a wamp router like Crossbar. The implementation does not yet support wamp authentication. This MQ allows buildbot to run in multi-master mode. Note that this implementation also does not support message persistence across a restart of the master.
For example, if a change is received, but the master shuts down before the schedulers can create build requests for it, then those schedulers will not be notified of the change when the master starts again.

router_url (mandatory): points to your router websocket url. Buildbot only supports wamp over websocket, which is a sub-protocol of http. SSL is supported using wss:// instead of ws://.

realm (optional, defaults to buildbot): defines the wamp realm to use for your buildbot messages.

wamp_debug_level (optional, defaults to error): defines the log level of autobahn.

You must use a router with a very reliable connection to the master. If for some reason the wamp connection is lost, then the master will stop, and should be restarted via a process manager.

Crossbar

The default Crossbar setup will just work with Buildbot, provided you use the example mq configuration below, and start Crossbar with:

# of course, you should work in a virtualenv...
pip install crossbar
crossbar init
crossbar start

.crossbar/config.json:

{
    "version": 2,
    "controller": {},
    "workers": [
        {
            "type": "router",
            "realms": [
                {
                    "name": "test_realm",
                    "roles": [
                        {
                            "name": "anonymous",
                            "permissions": [
                                {
                                    "uri": "",
                                    "match": "prefix",
                                    "allow": {
                                        "call": true,
                                        "register": true,
                                        "publish": true,
                                        "subscribe": true
                                    },
                                    "disclose": {
                                        "caller": false,
                                        "publisher": false
                                    },
                                    "cache": true
                                }
                            ]
                        }
                    ]
                }
            ],
            "transports": [
                {
                    "type": "web",
                    "endpoint": {
                        "type": "tcp",
                        "port": 1245
                    },
                    "paths": {
                        "ws": {
                            "type": "websocket"
                        }
                    }
                }
            ]
        }
    ]
}

Buildbot can be configured to use Crossbar with the following:

c["mq"] = {
    "type": "wamp",
    "router_url": "ws://localhost:1245/ws",
    "realm": "test_realm",
    "wamp_debug_level": "warn"
}

Please refer to the Crossbar documentation for more details.

2.5.2.3. Multi-master mode

See Multimaster for details on the multi-master mode in Buildbot Nine.

By default, Buildbot makes coherency checks that prevent typos in your master.cfg.
It makes sure schedulers are not referencing unknown builders and enforces that there is at least one builder. In the case of an asymmetric multi-master, those coherency checks can be harmful and prevent you from implementing what you want. For example, you might want to have one master dedicated to the UI, so that heavy load generated by builds will not impact page load times. To enable multi-master mode in this configuration, you will need to set the multiMaster option so that Buildbot doesn't warn about missing schedulers or builders.

# Enable multiMaster mode; disables warnings about unknown builders and
# schedulers
c['multiMaster'] = True
c['db'] = {
    'db_url': 'mysql://...',
}
c['mq'] = {
    # Need to enable multimaster aware mq. Wamp is the only option for now.
    'type': 'wamp',
    'router_url': 'ws://localhost:8080',
    'realm': 'realm1',
    # valid are: none, critical, error, warn, info, debug, trace
    'wamp_debug_level': 'error',
}

2.5.2.4. Site Definitions

Three basic settings describe the buildmaster in status reports:

c['title'] = "Buildbot"
c['titleURL'] = "http://buildbot.sourceforge.net/"

title is a short string that will appear at the top of this buildbot installation's home page (linked to the titleURL). titleURL is a URL string. HTML status displays will show title as a link to titleURL. This URL is often used to provide a link from buildbot HTML pages to your project's home page.

The buildbotURL string should point to the location where the buildbot's internal web server is visible. When status notices are sent to users (e.g., by email or over IRC), buildbotURL will be used to create a URL to the specific build or problem that they are being notified about.

2.5.2.5. Log Handling

c['logCompressionMethod'] = 'gz'
c['logMaxSize'] = 1024 * 1024  # 1M
c['logMaxTailSize'] = 32768
c['logEncoding'] = 'utf-8'

The logCompressionLimit enables compression of build logs on disk for logs that are bigger than the given size, or disables compression completely if set to False. The default value is 4096, which should be a reasonable default on most file systems. This setting has no impact on status plugins; it merely affects the disk space required on the master for build logs.

The logCompressionMethod controls what type of compression is used for build logs. Valid options are 'raw' (no compression), 'gz', 'lz4' (requires the lz4 package), 'br' (requires the buildbot[brotli] extra) or 'zstd' (requires the buildbot[zstd] extra). The default is 'zstd' if buildbot[zstd] is installed, and 'gz' otherwise.

Please find below some stats extracted from 50x "trial Pyflakes" runs (results may differ according to log type).

Space saving details:

compression | raw log size | compressed log size | space saving | compression speed
bz2         | 2.981 MB     | 0.603 MB            | 79.77%       | 3.433 MB/s
gz          | 2.981 MB     | 0.568 MB            | 80.95%       | 6.604 MB/s
lz4         | 2.981 MB     | 0.844 MB            | 71.68%       | 77.668 MB/s

The logMaxSize parameter sets an upper limit (in bytes) on how large logs from an individual build step can be. The default value is None, meaning no upper limit on log size. Any output exceeding logMaxSize will be truncated, and a message to this effect will be added to the log's HEADER channel.

If logMaxSize is set, and the output from a step exceeds the maximum, the logMaxTailSize parameter controls how much of the end of the build log will be kept. The effect of setting this parameter is that the log will contain the first logMaxSize bytes and the last logMaxTailSize bytes of output. Don't set this value too high, as the tail of the log is kept in memory.

The logEncoding parameter specifies the character encoding to use to decode bytestrings provided as logs.
It defaults to utf-8, which should work in most cases, but can be overridden if necessary. In extreme cases, a callable can be specified for this parameter. It will be called with byte strings, and should return the corresponding Unicode string. This setting can be overridden for a single build step with the logEncoding step parameter. It can also be overridden for a single log file by passing the logEncoding parameter to addLog.

2.5.2.6. Data Lifetime

Horizons

Previously, Buildbot implemented a global configuration for horizons. It is now implemented as a utility Builder and should be configured via the JanitorConfigurator.

Caches

c['caches'] = {
    'Changes': 100,        # formerly c['changeCacheSize']
    'Builds': 500,         # formerly c['buildCacheSize']
    'chdicts': 100,
    'BuildRequests': 10,
    'SourceStamps': 20,
    'ssdicts': 20,
    'objectids': 10,
    'usdicts': 100,
}

The caches configuration key contains the configuration for Buildbot's in-memory caches. These caches keep frequently-used objects in memory to avoid unnecessary trips to the database. Caches are divided by object type, and each has a configurable maximum size. The default size for each cache is 1, except where noted below. A value of 1 allows Buildbot to make a number of optimizations without consuming much memory. Larger, busier installations will likely want to increase these values.

The available caches are:

Changes: the number of change objects to cache in memory. This should be larger than the number of changes that typically arrive in the span of a few minutes, otherwise your schedulers will be reloading changes from the database every time they run. For distributed version control systems, like Git or Hg, several thousand changes may arrive at once, so setting this parameter to something like 10000 isn't unreasonable. This parameter is the same as the deprecated global parameter changeCacheSize. Its default value is 10.
Builds: the number of builds for each builder which are cached in memory. This number should be larger than the number of builds required for commonly-used status displays (the waterfall or grid views), so that those displays do not miss the cache on a refresh. This parameter is the same as the deprecated global parameter buildCacheSize. Its default value is 15.

chdicts: the number of rows from the changes table to cache in memory. This value should be similar to the value for Changes.

BuildRequests: the number of BuildRequest objects kept in memory. This number should be higher than the typical number of outstanding build requests. If the master ordinarily finds jobs for BuildRequests immediately, you may set a lower value.

SourceStamps: the number of SourceStamp objects kept in memory. This number should generally be similar to the value for BuildRequests.

ssdicts: the number of rows from the sourcestamps table to cache in memory. This value should be similar to the value for SourceStamps.

objectids: the number of object IDs (a means to correlate an object in the Buildbot configuration with an identity in the database) to cache. In this version, object IDs are not looked up often during runtime, so a relatively low value such as 10 is fine.

usdicts: the number of rows from the users table to cache in memory. Note that for a given user there will be a row for each attribute that user has.

2.5.2.7. Merging Build Requests

c['collapseRequests'] = True

This is a global default value for builders' collapseRequests parameter, and controls the merging of build requests. This parameter can be overridden on a per-builder basis. See Collapsing Build Requests for the allowed values for this parameter.

2.5.2.8. Prioritizing Builders

def prioritizeBuilders(buildmaster, builders):
    ...
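As one concrete illustration of such a callable (a sketch only, not Buildbot's built-in policy; it assumes nothing about builder objects beyond a name attribute), the stub could impose a deterministic, name-based order:

```python
def prioritizeBuilders(buildmaster, builders):
    # Sort the candidate builders by name so the master always tries to
    # start builds on them in a stable, predictable order.
    builders.sort(key=lambda b: b.name)
    return builders
```

A real priority function would more likely inspect pending request ages or resource availability; see Builder Priority Functions.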
c['prioritizeBuilders'] = prioritizeBuilders

By default, buildbot will attempt to start builds on builders in order, beginning with the builder with the oldest pending request. Customize this behavior with the prioritizeBuilders configuration key, which takes a callable. See Builder Priority Functions for details on this callable. This parameter controls the order in which the buildmaster can start builds, and is useful in situations where there is resource contention between builders, e.g., for a test database. It does not affect the order in which a builder processes the build requests in its queue. For that purpose, see Prioritizing Builds.

2.5.2.9. Prioritizing Workers

By default, Buildbot will select a worker for a build at random from the available workers. This can be adjusted with the select_next_worker function in the global master configuration, and additionally with the nextWorker per-builder configuration parameter. These two functions work exactly the same way: the function is passed three arguments, the Builder object which is assigning a new job, a list of WorkerForBuilder objects, and the BuildRequest. The function should return one of the WorkerForBuilder objects, or None if none of the available workers should be used. The function can optionally return a Deferred, which should fire with the same results.

def select_next_worker(builder, workers, buildrequest):
    ...

c["select_next_worker"] = select_next_worker

2.5.2.10. Configuring worker protocols

The protocols key defines how the buildmaster listens for connections from workers. The value of the key is a dictionary whose keys are protocol names and whose values are per-protocol configuration. The following protocols are supported:

pb - Perspective Broker protocol. This protocol supports not only connections from workers, but also remote Change Sources, status clients, and debug tools. It supports the following configuration:

port - specifies the listening port configuration.
This may be a numeric port, or a connection string, as defined in the ConnectionStrings guide.

msgpack_experimental_v7 - (experimental) MessagePack-based protocol. It supports the following configuration:

port - specifies the listening port configuration. This may be a numeric port, or a connection string, as defined in the ConnectionStrings guide.

Note: the master host must be visible to all workers that will attempt to connect to it. The firewall (if any) must be configured to allow external connections. Additionally, the configured listen port must in most cases be larger than 1024, as lower ports are usually restricted to root processes.

The following is a minimal example of protocol configuration:

c['protocols'] = {"pb": {"port": 10000}}

The following example only allows connections from localhost. This might be useful in cases where workers run on the same machine as the master (e.g., in very small Buildbot installations). The workers would need to be configured to contact the buildmaster at localhost:10000.

c['protocols'] = {"pb": {"port": "tcp:10000:interface=127.0.0.1"}}

The following example shows how to configure worker connections via TLS:

c['protocols'] = {"pb": {"port": "ssl:9989:privateKey=master.key:certKey=master.crt"}}

Please note that : characters in IPv6 addresses must be escaped with \, as must : and \ characters in paths. Read more about the connection string format in the ConnectionStrings documentation.

See also Worker TLS Configuration.

2.5.2.11. Defining Global Properties

The properties configuration key defines a dictionary of properties that will be available to all builds started by the buildmaster:

c['properties'] = {
    'Widget-version': '1.2',
    'release-stage': 'alpha',
}

2.5.2.12. Manhole

Manhole is an interactive Python shell which allows full access to the Buildbot master instance. It is probably only useful for buildbot developers.
See the documentation on Manhole implementations for available authentication and connection methods. The manhole configuration key accepts a single instance of a Manhole class. For example:

from buildbot import manhole
c['manhole'] = manhole.PasswordManhole(
    "tcp:1234:interface=127.0.0.1",
    "admin", "passwd",
    ssh_hostkey_dir="data/ssh_host_keys")

2.5.2.13. Metrics Options

c['metrics'] = {"log_interval": 10, "periodic_interval": 10}

metrics can be a dictionary that configures various aspects of the metrics subsystem. If metrics is None, then metrics collection, logging, and reporting will be disabled.

log_interval determines how often metrics should be logged to twistd.log. It defaults to 60s. If set to 0 or None, then logging of metrics will be disabled. This value can be changed via a reconfig.

periodic_interval determines how often various non-event-based metrics are collected, such as memory usage, uncollectable garbage, and reactor delay. This defaults to 10s. If set to 0 or None, then periodic collection of this data is disabled. This value can also be changed via a reconfig.

Read more about metrics in the Metrics section of the developer documentation.

2.5.2.14. Statistics Service

The Statistics Service (stats service for short) supports the collection of arbitrary data from within a running Buildbot instance and its export to a number of storage backends. Currently, only InfluxDB is supported as a storage backend. InfluxDB (or any other storage backend) is not a mandatory dependency: Buildbot can run without it, although StatsService will be of no use in that case. At present, StatsService can keep track of build properties, build times (start, end, duration), and arbitrary data produced inside Buildbot (more on this later).

Example usage:

captures = [stats.CaptureProperty('Builder1', 'tree-size-KiB'),
            stats.CaptureBuildDuration('Builder2')]
c['services'] = []
c['services'].append(stats.StatsService(
    storage_backends=[
        stats.InfluxStorageService('localhost', 8086, 'root', 'root', 'test', captures)
    ], name="StatsService"))

The services configuration value should be initialized as a list, and a StatsService instance should be appended to it as shown in the example above.

Statistics Service

class buildbot.statistics.stats_service.StatsService

This is the main class for statistics services. It is initialized in the master configuration as shown in the example above. It takes two arguments:

storage_backends: a list of storage backends (see Storage Backends). In the example above, stats.InfluxStorageService is an instance of a storage backend. Each storage backend is an instance of a subclass of statsStorageBase.

name: the name of this service.

yieldMetricsValue: this method can be used to send arbitrary data for storage. (See Using StatsService.yieldMetricsValue for more information.)

Capture Classes

class buildbot.statistics.capture.CaptureProperty

An instance of this class declares which properties must be captured and sent to the Storage Backends. It takes the following arguments:

builder_name: the name of the builder in which the property is recorded.

property_name: the name of the property to be recorded as a statistic.

callback=None: (optional) a custom callback function for this class. This callback function should take in two arguments, build_properties (dict) and property_name (str), and return a string that will be sent for storage in the storage backends.

regex=False: if set to True, the property name can be a regular expression. All properties matching this regular expression will be sent for storage.

class buildbot.statistics.capture.CapturePropertyAllBuilders

An instance of this class declares which properties must be captured on all builders and sent to the Storage Backends. It takes the following arguments:

property_name: the name of the property to be recorded as a statistic.
callback=None: (optional) a custom callback function for this class. This callback function should take in two arguments, build_properties (dict) and property_name (str), and return a string that will be sent for storage in the storage backends.

regex=False: if set to True, the property name can be a regular expression. All properties matching this regular expression will be sent for storage.

class buildbot.statistics.capture.CaptureBuildStartTime

An instance of this class declares which builders' start times are to be captured and sent to the Storage Backends. It takes the following arguments:

builder_name: the name of the builder whose times are to be recorded.

callback=None: (optional) a custom callback function for this class. This callback function should take in a Python datetime object and return a string that will be sent for storage in the storage backends.

class buildbot.statistics.capture.CaptureBuildStartTimeAllBuilders

An instance of this class declares that the start times of all builders are to be captured and sent to the Storage Backends. It takes the following arguments:

callback=None: (optional) a custom callback function for this class. This callback function should take in a Python datetime object and return a string that will be sent for storage in the storage backends.

class buildbot.statistics.capture.CaptureBuildEndTime

Exactly like CaptureBuildStartTime, except it declares the builders whose end times are to be recorded. The arguments are the same as for CaptureBuildStartTime.

class buildbot.statistics.capture.CaptureBuildEndTimeAllBuilders

Exactly like CaptureBuildStartTimeAllBuilders, except it declares that all builders' end times are to be recorded. The arguments are the same as for CaptureBuildStartTimeAllBuilders.

class buildbot.statistics.capture.CaptureBuildDuration

An instance of this class declares the builders whose build durations are to be recorded. It takes the following arguments:

builder_name: the name of the builder whose times are to be recorded.
report_in='seconds': one of 'seconds', 'minutes', or 'hours'. This is the unit in which the build time will be reported.

callback=None: (optional) a custom callback function for this class. This callback function should take in two Python datetime objects, a start_time and an end_time, and return a string that will be sent for storage in the storage backends.

class buildbot.statistics.capture.CaptureBuildDurationAllBuilders

An instance of this class declares that build durations are to be recorded for all builders. It takes the following arguments:

report_in='seconds': one of 'seconds', 'minutes', or 'hours'. This is the unit in which the build time will be reported.

callback=None: (optional) a custom callback function for this class. This callback function should take in two Python datetime objects, a start_time and an end_time, and return a string that will be sent for storage in the storage backends.

class buildbot.statistics.capture.CaptureData

An instance of this capture class is for capturing arbitrary data that is not stored as build-data. It needs to be used in combination with yieldMetricsValue (see Using StatsService.yieldMetricsValue). It takes the following arguments:

data_name: the name of the data to be captured. Same as in yieldMetricsValue.

builder_name: the name of the builder whose times are to be recorded.

callback=None: the callback function for this class. This callback receives the data sent to yieldMetricsValue as post_data (see Using StatsService.yieldMetricsValue). It must return a string that is to be sent to the storage backends for storage.

class buildbot.statistics.capture.CaptureDataAllBuilders

An instance of this capture class is for capturing arbitrary data that is not stored as build-data on all builders. It needs to be used in combination with yieldMetricsValue (see Using StatsService.yieldMetricsValue). It takes the following arguments:

data_name: the name of the data to be captured. Same as in yieldMetricsValue.
callback=None: the callback function for this class. This callback receives the data sent to yieldMetricsValue as post_data (see Using StatsService.yieldMetricsValue). It must return a string that is to be sent to the storage backends for storage.

Using StatsService.yieldMetricsValue

Advanced users can modify BuildSteps to use StatsService.yieldMetricsValue, which sends arbitrary data for storage to the StatsService. It takes the following arguments:

data_name: the name of the data being sent for storage.

post_data: a dictionary of key-value pairs that is sent for storage. The keys will act as columns in a database, and each value is stored under its column.

buildid: the integer build id of the current build. Obtainable in all BuildSteps.

Along with using yieldMetricsValue, the user will also need to use the CaptureData capture class. As an example, we can add the following to a build step:

yieldMetricsValue('test_data_name', {'some_data': 'some_value'}, buildid)

Then, we can add a capture class to the master configuration like this:

captures = [CaptureData('test_data_name', 'Builder1')]

Pass this captures list to a storage backend (as shown in the example at the top of this section) to capture this data.

Storage Backends

Storage backends are responsible for storing any statistics data sent to them. A storage backend will generally be some sort of database server running on a machine. (Note: this machine may be different from the one running the BuildMaster.) Currently, only InfluxDB is supported as a storage backend.

class buildbot.statistics.storage_backends.influxdb_client.InfluxStorageService

This class is a Buildbot client for the InfluxDB storage backend. InfluxDB is a distributed time series database that employs a key-value pair storage system. It requires the following arguments:

url: the URL where the service is running.

port: the port on which the service is listening.

user: username of an InfluxDB user.

password: password for user.
db: the name of the database to be used.

captures: a list of objects of the Capture Classes. This tells which statistics are to be stored in this storage backend.

name=None: (optional) the name of this storage backend.

2.5.2.15. secretsProviders

See Secret Management for details on secret concepts.

Example usage:

c['secretsProviders'] = [ .. ]

secretsProviders is a list of secret storages. See Secret Management to configure a secret storage provider.

2.5.2.16. BuildbotNetUsageData

Since buildbot 0.9.0, buildbot has a simple feature which sends usage analysis info to buildbot.net. This is very important for buildbot developers to understand how the community is using the tools. It allows them to better prioritize issues and understand which plugins are actually being used. It is also a tool for deciding whether to keep support for very old tools. For example, buildbot contains support for the venerable CVS, but we have no information on whether it actually works beyond the unit tests. We rely on the community to test and report issues with the old features. With BuildbotNetUsageData, we can know exactly which combinations of plugins are working together, how much people are customizing plugins, and which versions of the main dependencies people run.

We take your privacy very seriously. BuildbotNetUsageData will never send information specific to your code or intellectual property: no repository URLs, shell command values, host names, IP addresses, or custom class names. If it does, then this is a bug; please report it.

We still need to track a unique number per installation. This is done by taking a sha1 hash of the master's hostname, installation path, and fqdn. Using a secure hash means there is no way of recovering the hostname, path, and fqdn from the hash, but there is still a different hash for each master. You can see exactly what is sent in the master's twisted.log. Usage data is sent every time the master is started.
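The per-installation identifier described above can be sketched as follows. The exact fields, separator, and encoding Buildbot uses are assumptions here; the point is only that the hash is one-way yet stable per master:

```python
import hashlib

def installation_id(hostname, install_path, fqdn):
    # One-way digest: hostname, path and fqdn cannot be recovered from it,
    # but the same master always yields the same identifier.
    raw = '%s:%s:%s' % (hostname, install_path, fqdn)
    return hashlib.sha1(raw.encode('utf-8')).hexdigest()
```

Two masters with different hostnames or paths therefore report distinct identifiers without disclosing either value.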
BuildbotNetUsageData can be configured with 4 values:

c['buildbotNetUsageData'] = None disables the feature.

c['buildbotNetUsageData'] = 'basic' sends basic information to buildbot, including:

- versions of buildbot, python and twisted
- platform information (CPU, OS, distribution, python flavor (i.e., CPython vs PyPy))
- mq and database type (mysql or sqlite?)
- www plugins usage
- plugin usages: this counts the number of times each Buildbot class is used in your configuration. This counts workers, builders, steps, schedulers, and change sources. If the plugin is subclassed, it will be prefixed with a >.

Example of a basic report (for the metabuildbot):

{
    'versions': {
        'Python': '2.7.6',
        'Twisted': '15.5.0',
        'Buildbot': '0.9.0rc2-176-g5fa9dbf'
    },
    'platform': {
        'machine': 'x86_64',
        'python_implementation': 'CPython',
        'version': '#140-Ubuntu SMP Mon Jul',
        'processor': 'x86_64',
        'distro:': ('Ubuntu', '14.04', 'trusty')
    },
    'db': 'sqlite',
    'mq': 'simple',
    'plugins': {
        'buildbot.schedulers.forcesched.ForceScheduler': 2,
        'buildbot.schedulers.triggerable.Triggerable': 1,
        'buildbot.config.BuilderConfig': 4,
        'buildbot.schedulers.basic.AnyBranchScheduler': 2,
        'buildbot.steps.source.git.Git': 4,
        '>>buildbot.steps.trigger.Trigger': 2,
        '>>>buildbot.worker.base.Worker': 4,
        'buildbot.reporters.irc.IRC': 1
    },
    'www_plugins': ['buildbot_travis', 'waterfall_view']
}

c['buildbotNetUsageData'] = 'full' sends the basic information plus additional information:

- configuration of each builder: how the steps are arranged together.
For example:

{
    'builders': [
        ['buildbot.steps.source.git.Git', '>>>buildbot.process.buildstep.BuildStep'],
        ['buildbot.steps.source.git.Git', '>>buildbot.steps.trigger.Trigger'],
        ['buildbot.steps.source.git.Git', '>>>buildbot.process.buildstep.BuildStep'],
        ['buildbot.steps.source.git.Git', '>>buildbot.steps.trigger.Trigger']
    ]
}

c['buildbotNetUsageData'] = myCustomFunction declares a callback used to specify exactly what to send. This custom function takes the data generated for the full report, in the form of a dictionary, and returns a customized report as a JSON-able dictionary. You can use this to filter out any information you don't want to disclose. You can also use a custom http_proxy environment variable in order not to send any data while developing your callback.

2.5.2.17. Users Options

from buildbot.plugins import util
c['user_managers'] = []
c['user_managers'].append(
    util.CommandlineUserManager(username="user", passwd="userpw", port=9990))

user_managers contains a list of ways to manually manage User Objects within Buildbot (see User Objects). Currently implemented is a command-line tool, buildbot user, described at length in user. In the future, a web client will also be able to manage User Objects and their attributes.

As shown above, to enable the buildbot user tool, you must initialize a CommandlineUserManager instance in your master.cfg. CommandlineUserManager instances require the following arguments:

username: the username that will be registered on the PB connection; it needs to be used when calling buildbot user.

passwd: the password that will be registered on the PB connection; it needs to be used when calling buildbot user.

port: the PB connection port. It must be different from c['protocols']['pb']['port'] and must be specified when calling buildbot user.

2.5.2.18. Input Validation

import re
c['validation'] = {
    'branch': re.compile(r'^[\w.+/~-]*$'),
    'revision': re.compile(r'^[ \w\.\-\/]*$'),
    'property_name': re.compile(r'^[\w\.\-\/\~:]*$'),
    'property_value': re.compile(r'^[\w\.\-\/\~:]*$'),
}

This option configures the validation applied to user inputs of various types. This validation is important since these values are often included in command-line arguments executed on workers. Allowing arbitrary input from untrusted users may raise security concerns. The keys describe the type of input validated; the values are compiled regular expressions against which the input will be matched. The defaults for each type of input are those given in the example above.

2.5.2.19. Revision Links

The revlink parameter is used to create links from revision IDs in the web status to a web view of your source control system. The parameter's value must be a callable. By default, Buildbot is configured to generate revlinks for a number of open source hosting platforms (https://github.com, https://sourceforge.net and https://bitbucket.org).

The callable takes the revision id and repository argument, and should return a URL to the revision. Note that the revision id may not always be in the form you expect, so code defensively. In particular, a revision of "??" may be supplied when no other information is available. Note that SourceStamps that are not created from version-control changes (e.g., those created by a Nightly or Periodic scheduler) may have an empty repository string if the repository is not known to the scheduler.

Revision Link Helpers

Buildbot provides two helpers for generating revision links. buildbot.revlinks.RevlinkMatch takes a list of regular expressions and a replacement text. The regular expressions should all have the same number of capture groups. The replacement text should have sed-style references to those capture groups (i.e., '\1' for the first capture group), and a single '%s' reference for the revision ID. The repository given is tried against each regular expression in turn.
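The matching-and-substitution behaviour just described can be sketched in plain Python (a simplified stand-in for the helper, with an illustrative repository pattern; not the actual implementation):

```python
import re

# Simplified sketch of a RevlinkMatch-style helper: try each repository
# pattern in turn, then substitute capture groups and the revision ID
# into the link template.
def make_revlink(patterns, template):
    compiled = [re.compile(p) for p in patterns]

    def revlink(rev, repo):
        for pattern in compiled:
            match = pattern.match(repo)
            if match:
                # Expand sed-style \1 references, then insert the revision.
                return match.expand(template) % rev
        return None  # no pattern matched this repository

    return revlink
```

Returning None for an unknown repository lets a wrapper such as RevlinkMultiplexer fall through to the next callable.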
The results are then substituted into the replacement text, along with the revision ID, to obtain the revision link.

from buildbot.plugins import util
c['revlink'] = util.RevlinkMatch(
    [r'git://notmuchmail.org/git/(.*)'],
    r'http://git.notmuchmail.org/git/\1/commit/%s')

buildbot.revlinks.RevlinkMultiplexer takes a list of revision link callables and tries each in turn, returning the first successful match.

2.5.2.20. Codebase Generator

all_repositories = {
    r'https://hg/hg/mailsuite/mailclient': 'mailexe',
    r'https://hg/hg/mailsuite/mapilib': 'mapilib',
    r'https://hg/hg/mailsuite/imaplib': 'imaplib',
    r'https://github.com/mailinc/mailsuite/mailclient': 'mailexe',
    r'https://github.com/mailinc/mailsuite/mapilib': 'mapilib',
    r'https://github.com/mailinc/mailsuite/imaplib': 'imaplib',
}

def codebaseGenerator(chdict):
    return all_repositories[chdict['repository']]

c['codebaseGenerator'] = codebaseGenerator

For any incoming change, the codebase is set to ''. This codebase value is sufficient if all changes come from the same repository (or clones). If changes come from different repositories, extra processing is needed to determine the codebase for the incoming change. This codebase will then be a logical name for the combination of repository and/or branch, etc. The codebaseGenerator accepts a change dictionary as produced by buildbot.db.changes.ChangesConnectorComponent, with a changeid equal to None.

© Copyright Buildbot Team Members. Built with Sphinx using a theme provided by Read the Docs. | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/javadoc.html | Javadoc API Documentation :: Apache Log4j, a subproject of Apache Logging Services

Javadoc API Documentation

The table below contains links to the Javadoc API Documentation for the components you are most likely to use directly in code.

Component      | Description
API            | The logging interface (i.e., Log4j API) that applications should use and code against.
Implementation | The logging implementation (i.e., Log4j Core) that contains appenders, layouts, filters, and more.
Log4j Web      | Tools to use Log4j Core in Jakarta EE applications.
Log4j JPA      | Tools to use Log4j Core with the Java Persistence API.

Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Please read our privacy policy.
Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34 |
https://logging.apache.org/log4j/2.x/components.html | Components :: Apache Log4j a subproject of Apache Logging Services

Components

The Log4j 2 distribution contains the following artifacts:

log4j-bom
A public Bill-of-Materials that manages all the versions of Log4j artifacts. You can import the BOM in your build tool of preference:

Maven:

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.apache.logging.log4j</groupId>
          <artifactId>log4j-bom</artifactId>
          <version>2.25.3</version>
          <scope>import</scope>
          <type>pom</type>
        </dependency>
      </dependencies>
    </dependencyManagement>

Gradle:

    dependencies {
        implementation platform('org.apache.logging.log4j:log4j-bom:2.25.3')
    }

log4j
A private Bill-of-Materials used during the compilation and testing of the project.
Do not use this artifact, since it also manages versions of third-party projects. Use log4j-bom instead. log4j-1.2-api JPMS module org.apache.log4j The log4j-1.2-api artifact contains several tools to help users migrate from Log4j 1 to Log4j 2. See Log4j 1 to Log4j 2 Bridge for details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-1.2-api</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-1.2-api' log4j-api JPMS module org.apache.logging.log4j The log4j-api artifact contains the Log4j API. See Log4j API for more details. Maven Gradle <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>${log4j-api.version}</version> </dependency> implementation 'org.apache.logging.log4j:log4j-api:${log4j-api.version}' log4j-api-test JPMS module org.apache.logging.log4j.test The log4j-api-test artifact contains test fixtures useful to test Log4j API implementations. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api-test</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-api-test' log4j-appserver JPMS module org.apache.logging.log4j.appserver The log4j-appserver artifact contains: a bridge from Tomcat JULI to the Log4j API (see Replacing Tomcat logging system for more information), and a bridge from the Jetty 9 logging API to the Log4j API (see Replacing Jetty logging system for more information). Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-appserver</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-appserver' log4j-cassandra JPMS module org.apache.logging.log4j.cassandra The log4j-cassandra artifact contains an appender for the Apache Cassandra database. See Cassandra Appender for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-cassandra</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-cassandra' log4j-core JPMS module org.apache.logging.log4j.core The log4j-core artifact contains the reference implementation of the Log4j API . See Reference implementation for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-core' log4j-core-test JPMS module org.apache.logging.log4j.core.test The log4j-core-test artifact contains test fixtures useful to extend the reference implementation . Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core-test</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-core-test' log4j-couchdb JPMS module org.apache.logging.log4j.couchdb The log4j-couchdb artifact contains a provider to connect the NoSQL Appender with the Apache CouchDB database. See CouchDB provider for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-couchdb</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. 
runtimeOnly 'org.apache.logging.log4j:log4j-couchdb' log4j-docker JPMS module org.apache.logging.log4j.docker The log4j-docker artifact contains a lookup for applications running in a Docker container. See Docker lookup for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-docker</artifactId> <version>2.25.3</version> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-docker:2.25.3' log4j-flume-ng JPMS module org.apache.logging.log4j.flume The log4j-flume-ng artifact contains an appender for the Apache Flume log data collection service. See Flume Appender for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-flume-ng</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-flume-ng' log4j-iostreams JPMS module org.apache.logging.log4j.iostreams The log4j-iostreams artifact is an extension of the Log4j API to connect with legacy stream-based logging methods. See Log4j IOStreams for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-iostreams</artifactId> </dependency> We assume you use log4j-bom for dependency management. implementation 'org.apache.logging.log4j:log4j-iostreams' log4j-jakarta-smtp JPMS module org.apache.logging.log4j.jakarta.smtp The log4j-jakarta-smtp artifact contains an appender for the Jakarta Mail 2.0 API and later versions. See SMTP Appender for more information. Maven Gradle We assume you use log4j-bom for dependency management.
<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jakarta-smtp</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jakarta-smtp' log4j-jakarta-web JPMS module org.apache.logging.log4j.jakarta.web The log4j-jakarta-web artifact contains multiple utilities to run your applications in a Jakarta Servlet 5.0 or later environment: It synchronizes the lifecycle of Log4j Core and your application. See Integrating with web applications for more details. It contains a lookup for the data contained in a Servlet context. See Web Lookup for more details. It contains an appender to forward log events to a Servlet. See Servlet Appender for more details. Don't deploy this artifact together with log4j-web. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jakarta-web</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jakarta-web' log4j-jcl JPMS module org.apache.logging.log4j.jcl The log4j-jcl artifact contains a bridge from Apache Commons Logging to the Log4j API. See Installing JCL-to-Log4j API bridge for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jcl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jcl' log4j-jdbc-dbcp2 JPMS module org.apache.logging.log4j.jdbc.dbcp2 The log4j-jdbc-dbcp2 artifact contains a data source for the JDBC Appender that uses Apache Commons DBCP. See PoolingDriver connection source for more details. Maven Gradle We assume you use log4j-bom for dependency management.
<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jdbc-dbcp2</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jdbc-dbcp2' log4j-jpa JPMS module org.apache.logging.log4j.jpa The log4j-jpa artifact contains an appender for the Jakarta Persistence 2.2 API or the Java Persistence API. See JPA Appender for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jpa</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jpa' log4j-jpl JPMS module org.apache.logging.log4j.jpl The log4j-jpl artifact contains a bridge from System.Logger to the Log4j API. See Installing the JPL-to-Log4j API bridge for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jpl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jpl' log4j-jul JPMS module org.apache.logging.log4j.jul The log4j-jul artifact contains a bridge from java.util.logging to the Log4j API. See Installing the JUL-to-Log4j API bridge for more details. Don't deploy this artifact together with log4j-to-jul. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jul</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jul' log4j-layout-template-json JPMS module org.apache.logging.log4j.json.template.layout The log4j-layout-template-json artifact contains a highly extensible and configurable layout to format log events as JSON.
See JSON Template Layout for details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-layout-template-json</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-layout-template-json' log4j-mongodb JPMS module org.apache.logging.log4j.mongodb The log4j-mongodb artifact contains a provider to connect the NoSQL Appender with the MongoDB database. It is based on the latest version of the Java driver. See MongoDb provider for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-mongodb</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-mongodb' log4j-mongodb4 JPMS module org.apache.logging.log4j.mongodb4 The log4j-mongodb4 artifact contains a provider to connect the NoSQL Appender with the MongoDB database. It is based on version 4.x of the Java driver. See MongoDb4 provider for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-mongodb4</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-mongodb4' log4j-slf4j2-impl JPMS module org.apache.logging.log4j.slf4j2.impl The log4j-slf4j2-impl artifact contains a bridge from the SLF4J 2 API to the Log4j API. See Installing the SLF4J-to-Log4j API bridge for more details. Don't deploy this artifact together with either log4j-slf4j-impl or log4j-to-slf4j. Maven Gradle We assume you use log4j-bom for dependency management.
<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j2-impl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-slf4j2-impl' log4j-slf4j-impl JPMS module org.apache.logging.log4j.slf4j.impl The log4j-slf4j-impl artifact contains a bridge from the SLF4J 1 API to the Log4j API. See Installing the SLF4J-to-Log4j API bridge for more details. Don't deploy this artifact together with either log4j-slf4j2-impl or log4j-to-slf4j. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j-impl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-slf4j-impl' log4j-spring-boot JPMS module org.apache.logging.log4j.spring.boot The log4j-spring-boot artifact contains multiple utilities to integrate with Spring Framework 5.x or earlier versions and Spring Boot 2.x or earlier versions. It provides a property source. See Spring Property source for more details. It provides a lookup. See Spring lookup for more details. It provides an arbiter. See Spring arbiter for more details. It provides an alternative LoggingSystem implementation. See Log4j Spring Boot Support for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-spring-boot</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-spring-boot' log4j-spring-cloud-config-client JPMS module org.apache.logging.log4j.spring.cloud.config.client The log4j-spring-cloud-config-client artifact provides utilities to integrate with Spring Cloud Config 3.x or earlier versions. See Log4j Spring Cloud Configuration for more details.
Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-spring-cloud-config-client</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-spring-cloud-config-client' log4j-taglib JPMS module org.apache.logging.log4j.taglib The log4j-taglib artifact provides a Jakarta Server Pages 2.3 or earlier tag library that logs to the Log4j API. See Log4j Taglib for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-taglib</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-taglib' log4j-to-jul JPMS module org.apache.logging.log4j.to.jul The log4j-to-jul artifact contains an implementation of the Log4j API that logs to java.util.logging. See Installing JUL for more details. Don't deploy this artifact together with log4j-jul. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-to-jul</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-to-jul' log4j-to-slf4j JPMS module org.apache.logging.log4j.to.slf4j The log4j-to-slf4j artifact contains an implementation of the Log4j API that logs to the SLF4J API. See Installing Logback for more details. Don't deploy this artifact together with either log4j-slf4j-impl or log4j-slf4j2-impl. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-to-slf4j</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-to-slf4j' log4j-web JPMS module org.apache.logging.log4j.web The log4j-web artifact contains multiple utilities to run your applications in a Jakarta Servlet 4.0 or Java EE Servlet environment: It synchronizes the lifecycle of Log4j Core and your application. See Integrating with web applications for more details. It contains a lookup for the data contained in a Servlet context. See Web Lookup for more details. It contains an appender to forward log events to a Servlet. See Servlet Appender for more details. Don't deploy this artifact together with log4j-jakarta-web. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-web</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-web' | 2026-01-13T09:30:34 |
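Taken together, the artifact list above implies a typical minimal setup: application code compiles against log4j-api (the interface), while log4j-core (the implementation) is only needed when the application runs. The following is a sketch, assuming versions are managed by the log4j-bom import shown earlier:

```xml
<dependencies>
  <!-- The logging interface your code is written against (compile scope). -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
  </dependency>
  <!-- The reference implementation; only needed at runtime. -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```

Keeping log4j-core at runtime scope discourages code from depending on implementation classes, which keeps a later swap of the logging implementation cheap.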
https://logging.apache.org/log4j/2.x/manual/garbagefree.html | Garbage-free logging :: Apache Log4j a subproject of Apache Logging Services

Garbage-free logging

Garbage collection pauses are a common cause of latency spikes, and for many systems significant effort is spent on controlling these pauses. Log4j allocates temporary LogEvent, String, char[], byte[], etc. objects during steady-state logging. This contributes to pressure on the garbage collector and increases the frequency with which garbage collection pauses occur. In garbage-free mode, Log4j buffers and reuses objects to lessen this pressure. Note that tuning an application away from its defaults adds to the maintenance load.
You are strongly advised to measure your application's overall performance and then, if Log4j is found to be an important bottleneck, tune it carefully. When you do, we also recommend re-evaluating your assumptions on a regular basis to check that they still hold. Remember, premature optimization is the root of all evil.

The act of logging is an interplay between the logging API (i.e., Log4j API), where the programmer publishes logs, and a logging implementation (i.e., Log4j Core), where published logs get consumed: filtered, enriched, encoded, and written to files, databases, network sockets, etc. Both parties contain different features with different memory allocation characteristics. To achieve an end-to-end garbage-free logging system, they need to work hand in hand. Hence, we will discuss both: Log4j Core configuration and Log4j API usage. Garbage-free logging is currently implemented for Log4j API and its reference implementation, Log4j Core. If you use another setup (e.g., a different logging API or implementation), this promise might not hold.

Quick start

If you want a garbage-free Log4j setup but don't want to spend time on the associated details, you can quickly get started with the following instructions: set the system properties log4j2.enableThreadlocals and log4j2.garbagefreeThreadContextMap to true, and use garbage-free Layouts, Appenders, and Filters. This should be sufficient for a majority of use cases. If not for yours, keep on reading.

Log4j Core configuration

In order to have a garbage-free Log4j Core, you need to configure it using properties, and employ garbage-free Layouts, Appenders, and Filters.

Properties

Garbage-free logging can be configured for Log4j Core using the properties listed below. (See Configuration file for details on how you can set these properties.)

log4j2.isWebapp
    Env. variable: LOG4J_IS_WEBAPP
    Type: boolean
    Default value: true if the Servlet class is on the classpath, false otherwise
    Setting this property to true switches Log4j Core into "Web application mode" ("Web-app mode"). In this mode Log4j is optimized to work in a Servlet container. This mode is incompatible with log4j2.enableThreadlocals.

log4j2.enableThreadlocals
    Env. variable: LOG4J_ENABLE_THREADLOCALS
    Type: boolean
    Default value: false if Web-app mode is enabled, true otherwise
    Setting this property to true switches Log4j Core into "garbage-free mode" ("GC-free mode"). In this mode Log4j uses ThreadLocals for object pooling to prevent object allocations. ThreadLocal fields holding non-JDK classes can cause memory leaks in web applications when the application server's thread pool continues to reference these fields after the web application is undeployed. Hence, to avoid causing memory leaks, log4j2.enableThreadlocals by default reflects the opposite of log4j2.isWebapp.

log4j2.enableDirectEncoders
    Env. variable: LOG4J_ENABLE_DIRECT_ENCODERS
    Type: boolean
    Default value: true
    If true, garbage-aware layouts will directly encode log events into ByteBuffers provided by appenders. This prevents allocating temporary String and char[] instances.

log4j2.encoderByteBufferSize
    Env. variable: LOG4J_ENCODER_BYTE_BUFFER_SIZE
    Type: int
    Default value: 8192
    The size in bytes of the ByteBuffers stored in ThreadLocal fields by layouts and StringBuilderEncoders. This setting is only used if log4j2.enableDirectEncoders is set to true.

log4j2.encoderCharBufferSize
    Env. variable: LOG4J_ENCODER_CHAR_BUFFER_SIZE
    Type: int
    Default value: 4096
    The size in chars of the buffers stored in ThreadLocal fields by StringBuilderEncoders. This setting is only used if log4j2.enableDirectEncoders is set to true.

log4j2.initialReusableMsgSize
    Env. variable: LOG4J_INITIAL_REUSABLE_MSG_SIZE
    Type: int
    Default value: 128
    In GC-free mode, this property determines the initial size of the reusable StringBuilders used by ReusableMessages for formatting purposes.

log4j2.maxReusableMsgSize
    Env. variable: LOG4J_MAX_REUSABLE_MSG_SIZE
    Type: int
    Default value: 518
    In GC-free mode, this property determines the maximum size of the reusable StringBuilders used by ReusableMessages for formatting purposes. The default value is equal to 2 × (2 × log4j2.initialReusableMsgSize + 2) + 2 and allows the StringBuilder to be resized twice by the current JVM resize algorithm.

log4j2.layoutStringBuilderMaxSize
    Env. variable: LOG4J_LAYOUT_STRING_BUILDER_MAX_SIZE
    Type: int
    Default value: 2048
    This property determines the maximum size of the reusable StringBuilders used to format LogEvents.

log4j2.unboxRingbufferSize
    Env. variable: LOG4J_UNBOX_RINGBUFFER_SIZE
    Type: int
    Default value: 32
    The Unbox utility class can be used to format primitive values without incurring the boxing allocation cost. This property specifies the maximum number of primitive arguments to a log message that will be cached, and usually does not need to be changed.

log4j2.threadContextMap
    Env. variable: LOG4J_THREAD_CONTEXT_MAP
    Type: Class<? extends ThreadContextMap> or predefined constant
    Default value: WebApp
    Fully specified class name of a custom ThreadContextMap implementation class, or (since version 2.24.0) one of the predefined constants: NoOp, to disable the thread context; WebApp, a web application-safe implementation that only binds JRE classes to ThreadLocal to prevent memory leaks; GarbageFree, a garbage-free implementation.

log4j2.garbagefreeThreadContextMap
    Env. variable: LOG4J_GARBAGEFREE_THREAD_CONTEXT_MAP
    Default value: false
    If set to true, selects a garbage-free thread context map implementation.

log4j2.clock
    Env. variable: LOG4J_CLOCK
    Type: Class<? extends Clock> or predefined constant
    Default value: SystemClock
    Specifies the Clock implementation used to timestamp log events. This must be the fully qualified class name of the implementation or one of these predefined constants:
    SystemClock: uses the best available system time source. See Clock#systemDefaultZone() for details. Depending on the version of the JRE, this implementation might not be garbage-free, or might only become garbage-free when the code is hot enough. If you don't require nanosecond precision and you need a garbage-free implementation, use SystemMillisClock.
    SystemMillisClock: similar to SystemClock, but truncates the result to a millisecond. This implementation is garbage-free.
    CachedClock: uses a separate thread to update the timestamp value. See CachedClock for details.
    CoarseCachedClock: an alternative implementation of CachedClock with a slightly lower precision. See CoarseCachedClock for details.

Layouts

The following layouts can be configured to run garbage-free during steady-state logging: GelfLayout, JsonTemplateLayout, PatternLayout. To understand which configuration knobs exhibit what kind of allocation behaviour, see their dedicated pages.

Implementation notes: Garbage-free layouts need to implement the Encoder<LogEvent> interface. StringBuilderEncoder helps with encoding text to bytes in a garbage-free manner.

Appenders

The following appenders are garbage-free during steady-state logging: ConsoleAppender, FileAppender, MemoryMappedFileAppender, RandomAccessFileAppender, RollingFileAppender (except during rollover), RollingRandomAccessFileAppender (except during rollover). Any other appender not in the above list (including AsyncAppender) is not garbage-free.

Implementation notes: Garbage-free appenders need to provide their layout with a ByteBufferDestination implementation that the layout can directly write into.
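The quick-start advice and the properties discussed above can be collected in one place. The following is a minimal sketch of a log4j2.component.properties file; the property names and values are taken from this page, and whether you set them in a file like this or as -D system properties on the command line is a deployment choice:

```properties
# Switch Log4j Core into garbage-free mode (ThreadLocal object pooling).
log4j2.enableThreadlocals=true
# Select the garbage-free thread context map implementation.
log4j2.garbagefreeThreadContextMap=true
# Direct encoding into appender-provided ByteBuffers (true by default,
# shown here only to make the setting explicit).
log4j2.enableDirectEncoders=true
```

Remember the caveat stated earlier: garbage-free mode relies on ThreadLocals and is incompatible with Web-app mode, so these settings are not appropriate for applications undeployed and redeployed inside a shared Servlet container.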
AbstractOutputStreamAppender has been modified to make the following appenders garbage-free: ConsoleAppender, (Rolling)FileAppender, (Rolling)RandomAccessFileAppender, MemoryMappedFileAppender. An effort has been made to minimize the impact on custom appenders that extend AbstractOutputStreamAppender, but it is impossible to guarantee that changing the superclass will not impact some subclass. Custom appenders that extend AbstractOutputStreamAppender should verify that they still function correctly. In case there is a problem, the log4j2.enableDirectEncoders system property can be set to false to revert to the pre-Log4j 2.6 behaviour.

Filters

The following filters are garbage-free during steady-state logging: CompositeFilter (adding and removing element filters creates temporary objects for thread safety), DynamicThresholdFilter, LevelRangeFilter (garbage-free since 2.8), MapFilter (garbage-free since 2.8), MarkerFilter (garbage-free since 2.8), StructuredDataFilter (garbage-free since 2.8), ThreadContextMapFilter (garbage-free since 2.8), ThresholdFilter (garbage-free since 2.8), TimeFilter (garbage-free since 2.8, except when the range must be recalculated once per day). Any other filter not in the above list is not garbage-free.

Limitations

There are certain caveats associated with the configuration of garbage-free logging:

Property substitutions: Some property substitutions (e.g., ones using Date Lookup) might result in temporary objects being created during steady-state logging.

Asynchronous logger wait strategies: As of version 2.18.0, the default asynchronous logger wait strategy (i.e., Timeout) is garbage-free while running against both LMAX Disruptor 3 and 4. See log4j2.asyncLoggerWaitStrategy for details on predefined wait strategies.

Log4j API usage

Log4j API contains several features to facilitate garbage-free logging:

Parameterized message arguments: The Logger interface contains methods for parameterized messages with up to 10 arguments.
Logging more than 10 parameters creates vararg arrays.

Encoding custom objects: When a message parameter is of a type unknown to the layout, it is encoded by calling toString() on the object. Most objects don't have garbage-free toString() methods. Objects can provide their own garbage-free encoding by implementing either Java's CharSequence or Log4j's StringBuilderFormattable.

Avoiding autoboxing: We made an effort to make logging garbage-free without requiring code changes in existing applications, but there is one area where this was not possible. When logging primitive values (i.e., int, double, boolean, etc.), the JVM autoboxes these primitive values to their Object wrapper equivalents, creating garbage. Log4j provides an Unbox utility to prevent autoboxing of primitive parameters. This utility contains a ThreadLocal pool of reused StringBuilders; the pool size is configured by the log4j2.unboxRingbufferSize system property. The Unbox.box(primitive) methods write directly into a StringBuilder, and the resulting text is copied into the final log message text without creating temporary objects.

    import static org.apache.logging.log4j.util.Unbox.box;

    LOGGER.debug("Prevent primitive autoboxing {} {}", box(10L), box(2.6d));

Limitations

Not every Log4j API feature is garbage-free; specifically:

The ThreadContext map (aka MDC) is not garbage-free by default, but can be configured to be garbage-free by setting the log4j2.garbagefreeThreadContextMap system property to true.

The ThreadContext stack (aka NDC) is not garbage-free.
- Logging very large messages (i.e., more than log4j2.maxReusableMsgSize characters, which defaults to 518), when all loggers are asynchronous loggers, will cause the internal StringBuilder in the RingBuffer to be trimmed back to its configured maximum size.
- Logging messages containing ${variable} substitutions creates temporary objects.
- Logging a lambda as a parameter (LOGGER.info("lambda value is {}", () -> callExpensiveMethod());) creates a vararg array. Logging a lambda expression by itself (LOGGER.debug(() -> callExpensiveMethod());) is garbage-free.
- The traceEntry() and traceExit() methods create temporary objects.
- Time calculations are not garbage-free when the log4j2.usePreciseClock system property (defaults to false) is set to true.

Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Please read our privacy policy. Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
https://logging.apache.org/log4j/2.x/migrate-from-logback.html | Migrating from Logback :: Apache Log4j

Migrating from Logback

Logback is a logging implementation for the SLF4J logging API, just like Log4j Core is a logging implementation for the Log4j API. On this page we will guide you through migrating from Logback to Log4j Core as your logging implementation. Instead of migrating your logging implementation, Logback, are you looking to migrate your logging API, SLF4J? Please refer to Migrating from SLF4J.

A brief introduction to the logging API, implementation, and bridge concepts:

Logging API

A logging API is an interface your code or your dependencies directly log against. It is required at compile time.
It is implementation agnostic: it ensures that your application can write logs without being tied to a specific logging implementation. Log4j API, SLF4J, JUL (Java Logging), JCL (Apache Commons Logging), JPL (Java Platform Logging), and JBoss Logging are major logging APIs.

Logging implementation

A logging implementation is only required at runtime and can be changed without the need to recompile your software. Log4j Core, JUL (Java Logging), and Logback are the most well-known logging implementations.

Logging bridge

Logging implementations accept input from a single logging API of their preference: Log4j Core from Log4j API, Logback from SLF4J, etc. A logging bridge is a simple logging implementation of a logging API that forwards all messages to a foreign logging API. Logging bridges allow a logging implementation to accept input from other logging APIs that are not its primary logging API. For instance, log4j-slf4j2-impl bridges SLF4J calls to the Log4j API and effectively enables Log4j Core to accept input from SLF4J. To make things a little more tangible, consider the following visualization of a typical Log4j Core installation with bridges for an application:

Figure 1. Visualization of a typical Log4j Core installation with SLF4J, JUL, and JPL bridges

Migrating

You either have an application using Logback at runtime, or a library using Logback for tests. In either case, you can replace Logback with Log4j Core as follows:

- Remove the ch.qos.logback:logback-classic dependency.
- Remove the logback.xml and logback-test.xml files.
- Follow the instructions shared in the "Getting started" page (for applications or for libraries).

Next you need to re-organize your logging API bridges such that all foreign APIs are bridged to Log4j API, the logging API implemented by Log4j Core. This is explained in the next section.

Bridges

It is highly likely that you were bridging all logging APIs (including Log4j API!) to SLF4J, the logging API implemented by Logback.
There are two approaches you can take here to ensure all logging APIs are instead bridged to Log4j API, the logging API implemented by Log4j Core:

Bridge all logging APIs to Log4j API

We strongly advise you to bridge all foreign logging APIs directly to Log4j API. You can use the cheat sheet below to implement that.

Table 1. Dependency migration cheat sheet (if the dependency on the left is present, replace it with the one on the right):

- org.apache.logging.log4j:log4j-to-slf4j → org.apache.logging.log4j:log4j-slf4j2-impl
- org.slf4j:jcl-over-slf4j → commons-logging:commons-logging (version >= 1.3.0)
- org.slf4j:jul-to-slf4j → org.apache.logging.log4j:log4j-jul
- org.slf4j:log4j-over-slf4j → org.apache.logging.log4j:log4j-1.2-api
- org.springframework.boot:spring-boot-starter-logging → org.springframework.boot:spring-boot-starter-log4j2

Bridge all logging APIs to SLF4J, and bridge SLF4J to Log4j API

You can implement this by replacing the org.apache.logging.log4j:log4j-to-slf4j dependency with org.apache.logging.log4j:log4j-slf4j2-impl. This approach is not recommended! It incurs certain drawbacks, since some logging API calls will need to cross multiple bridges. For instance, a call to JUL will first be bridged to SLF4J, and from there to Log4j API.

Configuration

It might not always be trivial to match the contents of the newly created log4j2.xml and log4j2-test.xml files with your old logback.xml and logback-test.xml files. While all Logback components have corresponding equivalents in Log4j Core, they might not share the same name or configuration. To assist with migrating Logback configuration components to Log4j Core, see the following pages: Appenders, Layouts, Filters. For the complete list of all Log4j configuration knobs, see the Configuration page.
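As a concrete illustration, the first cheat-sheet row might look like this in a Maven POM. This is a sketch: the coordinates come from the cheat sheet, and 2.25.3 (the latest release listed on the download page) stands in for whatever Log4j version your project uses.

```xml
<!-- Before: Log4j API calls funneled into SLF4J/Logback.
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-to-slf4j</artifactId>
</dependency>
-->
<!-- After: SLF4J calls bridged to the Log4j API, so Log4j Core
     receives input from SLF4J. Adjust the version to your own. -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j2-impl</artifactId>
  <version>2.25.3</version>
</dependency>
```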
Parameterized logging

A common mistake in parameterized logging is to add a {} placeholder for the exception associated with a log event:

} catch (Exception exception) {
    logger.error("The foo process exited with an error: {}", exception);
}

Log4j Core and Logback differ in the way they treat this statement:

Logback

Logback interprets the exception argument as a throwable and removes it from the list of parameters. We end up with a parameterized statement with one placeholder, but zero parameters. The placeholder therefore remains as is:

The foo process exited with an error: {}
java.lang.RuntimeException: Message
    at example.MigrateFromLogback.doLogWrong(MigrateFromLogback.java:10)
    ...

Log4j Core

Log4j Core first looks for the parameters of the message. Since the format string has one placeholder, the exception argument is interpreted as a parameter of the log message. The throwable associated with the log event is null, which results in a missing stack trace:

The foo process exited with an error: java.lang.RuntimeException: Message

To fix this problem and get the same output in both backends, you should remove the placeholder from the format string:

} catch (Exception exception) {
    logger.error("The foo process exited with an error.", exception);
}

After the change, the output will look as follows:

The foo process exited with an error.
java.lang.RuntimeException: Message
    at example.MigrateFromLogback.doLogWrong(MigrateFromLogback.java:10)
    ...

As a temporary solution, the SLF4J-to-Log4j API bridges contain a special MessageFactory that classifies trailing Throwable arguments in the same way Logback does. To use it, set the log4j2.messageFactory configuration property to org.apache.logging.slf4j.message.ThrowableConsumingMessageFactory.
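The disambiguation rule described above can be sketched in plain Java. This is an illustration of the behaviour, not Log4j's actual implementation: the placeholder count is compared against the argument count, and a trailing Throwable only becomes the event's exception when there are more arguments than placeholders.

```java
// Sketch of how a trailing Throwable is classified: count "{}" placeholders
// first, then decide whether the last argument is a message parameter or
// the log event's exception.
public class TrailingThrowable {
    static int countPlaceholders(String format) {
        int count = 0;
        for (int i = format.indexOf("{}"); i >= 0; i = format.indexOf("{}", i + 2)) {
            count++;
        }
        return count;
    }

    /** True when the last argument is consumed as the event's Throwable. */
    static boolean treatsLastAsThrowable(String format, Object... args) {
        return args.length > 0
                && args[args.length - 1] instanceof Throwable
                && countPlaceholders(format) < args.length;
    }

    public static void main(String[] args) {
        Exception e = new RuntimeException("Message");
        // One placeholder, one argument: the exception fills the placeholder,
        // so no stack trace is attached.
        System.out.println(treatsLastAsThrowable("exited with an error: {}", e)); // false
        // No placeholder: the trailing exception becomes the event's Throwable.
        System.out.println(treatsLastAsThrowable("exited with an error.", e));    // true
    }
}
```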
https://logging.apache.org/log4j/2.x/download.html#versions | Download :: Apache Log4j

Download

You can manually download all published Log4j distributions, verify them, and see their licensing information by following the instructions in the Download page of Logging Services. Are you looking for the Log4j installation instructions? Proceed to Installation. Are you looking for the list of changes associated with a particular release? Proceed to Release notes.

Source distribution

You can download the source code of the latest Log4j release using the links below:

Table 1.
Source distribution files:

- Sources: apache-log4j-2.25.3-src.zip
- Checksum: apache-log4j-2.25.3-src.zip.sha512
- Signature: apache-log4j-2.25.3-src.zip.asc
- Signing keys: KEYS

Binary distribution

A set of binaries of Log4j is available through two main distribution channels:

ASF Nexus Repository

All the binary artifacts are available on the Apache Software Foundation repository.apache.org Nexus repository. Its content is mirrored to the Maven Central repository. See Components for more information on the GAV coordinates of the artifacts.

Binary distribution archive

All the artifacts in the ASF Nexus repository are also available in a single ZIP archive:

Table 2. Binary distribution files:

- Binaries: apache-log4j-2.25.3-bin.zip
- Checksum: apache-log4j-2.25.3-bin.zip.sha512
- Signature: apache-log4j-2.25.3-bin.zip.asc
- Signing keys: KEYS

The authenticity of the Log4j binary release is independently verified by the Reproducible Builds for Maven Central Repository project. You can check the reproducibility status of the artifacts on their org.apache.logging.log4j:log4j RB check page.

Software Bill of Materials (SBOM)

Each Log4j artifact is accompanied by a Software Bill of Materials (SBOM). See the Download page of Logging Services for details.

Available versions

Below you can find the list of available Log4j versions and their associated maintenance status: Active Development (AD), Active Maintenance (AM), End-of-Maintenance (EOM), and End-of-Life (EOL). Refer to Versioning and maintenance policy for details.

Table 3. Maintenance status of selected Log4j versions (Version | Status | Latest release | First stable release | EOM | EOL | Notes):

3.0.x | AD | 3.0.0-beta3 | | | |
2.26.x | AD | | | | |
2.25.x | AM | 2.25.3 | 2025-12-15 | | |
2.24.x | EOM | 2.24.3 | 2024-09-03 | 2025-06-13 | |
2.12.x | EOM | 2.12.4 | 2019-06-23 | 2021-12-29 | | Last release supporting Java 7
2.3.x | EOM | 2.3.2 | 2015-05-09 | 2021-12-29 | | Last release supporting Java 6
1.x | EOL | 1.2.17 | 2000-01-08 | 2014-07-12 | 2015-08-05 | Last release supporting Java 1.4

Table 4.
Maintenance status of all Log4j versions (Version | Status | Latest release | First release | EOM | EOL):

3.0.x | AD | 3.0.0-beta3 | | |
2.26.x | AD | | | |
2.25.x | AM | 2.25.2 | 2025-06-13 | |
2.24.x | EOM | 2.24.3 | 2024-09-03 | 2025-06-13 |
2.23.x | EOM | 2.23.1 | 2024-02-17 | 2024-09-03 |
2.22.x | EOM | 2.22.1 | 2023-11-17 | 2024-02-17 |
2.21.x | EOM | 2.21.1 | 2023-10-12 | 2023-11-17 |
2.20.x | EOM | 2.20.0 | 2023-02-17 | 2023-10-12 |
2.19.x | EOM | 2.19.0 | 2022-09-09 | 2023-02-17 |
2.18.x | EOM | 2.18.0 | 2022-06-28 | 2022-09-09 |
2.17.x | EOM | 2.17.2 | 2021-12-17 | 2022-06-28 |
2.16.x | EOM | 2.16.0 | 2021-12-13 | 2021-12-17 |
2.15.x | EOM | 2.15.0 | 2021-12-06 | 2021-12-13 |
2.14.x | EOM | 2.14.1 | 2020-11-06 | 2021-12-06 |
2.13.x | EOM | 2.13.3 | 2019-12-11 | 2020-11-06 |
2.12.x | EOM | 2.12.4 | 2019-06-23 | 2021-12-29 |
2.11.x | EOM | 2.11.2 | 2018-03-11 | 2019-06-23 |
2.10.x | EOM | 2.10.0 | 2017-11-18 | 2018-03-11 |
2.9.x | EOM | 2.9.1 | 2017-08-26 | 2017-11-18 |
2.8.x | EOM | 2.8.2 | 2017-01-21 | 2017-08-26 |
2.7.x | EOM | 2.7 | 2016-10-02 | 2017-01-21 |
2.6.x | EOM | 2.6.2 | 2016-05-25 | 2016-10-02 |
2.5.x | EOM | 2.5 | 2015-12-06 | 2016-05-25 |
2.4.x | EOM | 2.4.1 | 2015-09-20 | 2015-12-06 |
2.3.x | EOM | 2.3.2 | 2015-05-09 | 2021-12-29 |
2.2.x | EOM | 2.2 | 2015-02-22 | 2015-05-09 |
2.1.x | EOM | 2.1 | 2014-10-19 | 2015-02-22 |
2.0.x | EOM | 2.0.2 | 2014-07-12 | 2014-10-19 |
1.x | EOL | 1.2.17 | 2000-01-08 | 2014-07-12 | 2015-08-05
http://docs.buildbot.net/current/manual/configuration/www.html | 2.5.17. Web Server — Buildbot 4.3.0 documentation

2.5.17. Web Server

Note: As of Buildbot 0.9.0, the built-in web server replaces the old WebStatus plugin.

Buildbot contains a built-in web server. This server is configured with the www configuration key, which specifies a dictionary with the following keys:

port

The TCP port on which to serve requests. It might be an integer or any string accepted by serverFromString (ex: "tcp:8010:interface=127.0.0.1" to listen on another interface). Note that using twisted's SSL endpoint is discouraged. Use a reverse proxy that offers proper SSL hardening instead (see Reverse Proxy Configuration). If this is None (the default), then the master will not implement a web server.
json_cache_seconds

The number of seconds into the future at which an HTTP API response should expire.

rest_minimum_version

The minimum supported REST API version. Any versions less than this value will not be available. This can be used to ensure that no clients are depending on API versions that will soon be removed from Buildbot.

plugins

This key gives a dictionary of additional UI plugins to load, along with configuration for those plugins. These plugins must be separately installed in the Python environment, e.g., pip install buildbot-waterfall-view. See UI plugins. For example:

c['www'] = {
    'plugins': {'waterfall_view': True}
}

default_page

Configure the default landing page of the web server, for example, to forward directly to another plugin. For example:

c['www']['default_page'] = 'console'

debug

If true, then debugging information will be output to the browser. This is best set to false (the default) on production systems, to avoid the possibility of information leakage.

allowed_origins

This gives a list of origins which are allowed to access the Buildbot API (including control via JSONRPC 2.0). It implements cross-origin resource sharing (CORS), allowing pages at origins other than the Buildbot UI to use the API. Each origin is interpreted as a filename match expression, with ? matching one character and * matching anything. Thus ['*'] will match all origins, and ['https://*.buildbot.net'] will match secure sites under buildbot.net. The Buildbot UI will operate correctly without this parameter; it is only useful for allowing access from other web applications.

auth

Authentication module to use for the web server. See Authentication plugins.

avatar_methods

List of methods that can be used to get avatar pictures to use for the web server.
By default, Buildbot uses Gravatar to get images associated with each user. If you want to disable this, you can specify an empty list:

c['www'] = {
    'avatar_methods': []
}

You could also use the GitHub user avatar if GitHub authentication is enabled:

c['www'] = {
    'avatar_methods': [util.AvatarGitHub()]
}

class AvatarGitHub(github_api_endpoint=None, token=None, debug=False, verify=True)

Parameters:

- github_api_endpoint (string) – specify the GitHub API endpoint if you work with GitHub Enterprise
- token (string) – a GitHub API token used to authenticate all requests to the API. It is strongly recommended to use an API token, since it increases GitHub API rate limits significantly
- client_id (string) – a GitHub OAuth client ID to use with the client secret to authenticate all requests to the API in place of the token
- client_secret (string) – a GitHub OAuth client secret to use with the client ID above
- debug (boolean) – log every request and its response
- verify (boolean) – disable SSL verification, for the case where you use temporary self-signed certificates on a GitHub Enterprise installation

This class requires the txrequests package to allow interaction with the GitHub REST API. For corporate pictures, you can use LdapUserInfo, which can also act as an avatar provider. See Authentication plugins.

logfileName

Filename used for HTTP access logs, relative to the master directory. If set to None or the empty string, the content of the logs will land in the main twisted.log log file. (Defaults to http.log)

logRotateLength

The number of bytes after which the http.log file will be rotated. (Defaults to the same value as for the twisted.log file, set in buildbot.tac)

maxRotatedFiles

The number of log files that will be kept when rotating. (Defaults to the same value as for the twisted.log file, set in buildbot.tac)

versions

Custom component versions that you'd like to display on the About page.
Buildbot will automatically prepend the versions of Python, Twisted, and Buildbot itself to the list. versions should be a list of tuples. For example:

c['www'] = {
    # ...
    'versions': [
        ('master.cfg', '0.1'),
        ('OS', 'Ubuntu 14.04'),
    ]
}

The first element of a tuple stands for the name of the component, the second for the corresponding version.

custom_templates_dir

This directory will be parsed for custom angularJS templates to replace the original website templates. You can use this to slightly customize Buildbot's look for your project, but to add any logic, you will need to create a full-blown plugin. If the directory string is relative, it will be joined to the master's basedir. Buildbot uses the jade file format natively (which has been renamed to 'pug' in the nodejs ecosystem), but you can also use HTML format if you prefer. Either *.jade files or *.html files can be used to override templates with the same name in the UI. On the regular nodejs UI build system, we use nodejs's pug module to compile jade into html. For custom_templates, we use the pypugjs interpreter to parse the jade templates before sending them to the UI. pip install pypugjs is required to use jade templates. You can also override a plugin's directives, but they have to be in another directory, corresponding to the plugin's name in its package.json. For example:

# replace the template whose source is in:
# www/base/src/app/builders/build/build.tpl.jade
build.jade  # here we use a jade (aka pug) file

# replace the template whose source is in:
# www/console_view/src/module/view/builders-header/console.tpl.jade
console_view/console.html  # here we use html format

Known differences between nodejs's pug and pyjade: quotes in attributes are not quoted (https://github.com/syrusakbary/pyjade/issues/132).
This means you should use double quotes for attributes, e.g.:

tr(ng-repeat="br in buildrequests | orderBy:'-submitted_at'")

pypugjs may have some differences, but it is a maintained fork of pyjade: https://github.com/kakulukia/pypugjs

change_hook_dialects

See Change Hooks.

cookie_expiration_time

This allows defining the timeout of the session cookie. Should be a datetime.timedelta. Default is one week.

import datetime
c['www'] = {
    # ...
    'cookie_expiration_time': datetime.timedelta(weeks=2)
}

ui_default_config

Settings in the settings page are stored per browser. This configuration parameter allows overriding the default settings for all your users. If a user has already changed a value from the default, this will have no effect for them. The settings page in the UI will tell you what to insert in your master.cfg to reproduce the configuration you have in your own browser. For example:

c['www']['ui_default_config'] = {
    'Builders.buildFetchLimit': 500,
    'Workers.showWorkerBuilders': True,
}

ws_ping_interval

Send websocket pings every ws_ping_interval seconds. This is useful to avoid websocket timeouts when using reverse proxies or CDNs. If the value is 0 (the default), pings are disabled.

theme

Allows configuring certain properties of the web frontend, such as colors. The configuration value is a dictionary. The keys correspond to certain CSS variable names that are used throughout the web frontend and made configurable. The values correspond to CSS values of these variables. The keys and values are not sanitized, so using data derived from user-supplied information is a security risk.
The default is the following:

c["www"]["theme"] = {
    "bb-sidebar-background-color": "#30426a",
    "bb-sidebar-header-background-color": "#273759",
    "bb-sidebar-header-text-color": "#fff",
    "bb-sidebar-title-text-color": "#627cb7",
    "bb-sidebar-footer-background-color": "#273759",
    "bb-sidebar-button-text-color": "#b2bfdc",
    "bb-sidebar-button-hover-background-color": "#1b263d",
    "bb-sidebar-button-hover-text-color": "#fff",
    "bb-sidebar-button-current-background-color": "#273759",
    "bb-sidebar-button-current-text-color": "#b2bfdc",
    "bb-sidebar-stripe-hover-color": "#e99d1a",
    "bb-sidebar-stripe-current-color": "#8c5e10",
}

Note: The buildbotURL configuration value gives the base URL that all masters will use to generate links. The www configuration gives the settings for the web server. In simple cases, the buildbotURL contains the hostname and port of the master, e.g., http://master.example.com:8010/. In more complex cases, with multiple masters, web proxies, or load balancers, the correspondence may be less obvious.

2.5.17.1. UI plugins

Waterfall View

Waterfall shows the whole Buildbot activity in a vertical time line. Builds are represented with boxes whose height varies according to their duration. Builds are sorted by builder on the horizontal axis, which allows you to see how builders are scheduled together.

pip install buildbot-waterfall-view

c['www'] = {
    'plugins': {'waterfall_view': True}
}

Note: Waterfall is the emblematic view of Buildbot Eight. It made it possible to see the whole Buildbot activity very quickly. Waterfall, however, had big scalability issues, and larger installs had to disable the page in order to avoid master hangs of tens of seconds caused by rendering a big waterfall page. The whole Buildbot Eight internal status API was tailored to make Waterfall possible. This is not the case anymore with Buildbot Nine, which has a more generic and scalable Data API and REST API.
This is the reason why Waterfall does not display the step details anymore. However, nothing is impossible: we could make a specific REST API available to generate all the data needed for Waterfall on the server. Please step in if you want to help improve the Waterfall view.

Console View

Console view shows the whole Buildbot activity arranged by changes as discovered by Change Sources and Changes vertically and builders horizontally. If a builder has no build in the current time range, it will not be displayed. If no change is available for a build, then it will generate a fake change according to the got_revision property. Console view will also group the builders by tags. When there are several tags defined per builder, it will first group the builders by the tag that is defined for most builders. Then, given those builders, it will group them again in another tag cluster. In order to keep the UI usable, you have to keep your tags short!

pip install buildbot-console-view

c['www'] = {
    'plugins': {'console_view': True}
}

Note: Nine's Console View is the equivalent of Buildbot Eight's Console and tgrid views. Unlike Waterfall, we think it is now feature equivalent and even better, with its live update capabilities. Please submit an issue if you think there is an issue displaying your data, with screenshots of what happens and suggestions on what to improve.

Grid View

Grid view shows the whole Buildbot activity arranged by builders vertically and changes horizontally. It is equivalent to Buildbot Eight's grid view. By default, changes on all branches are displayed, but only one branch may be filtered by the user. Builders can also be filtered by tags. This feature is similar to the one in the builder list.

pip install buildbot-grid-view

c['www'] = {
    'plugins': {'grid_view': True}
}

Badges

The Buildbot badges plugin produces an image in SVG or PNG format with information about the last build for the given builder name.
PNG generation is based on the CAIRO SVG engine; it requires a bit more CPU to generate.

pip install buildbot-badges

c['www'] = {
    'plugins': {'badges': {}}
}

You can then access your builder's badges using URLs like http://<buildbotURL>/plugins/badges/<buildername>.svg. The default templates are very configurable via the following options:

{
    "left_pad": 5,
    "left_text": "Build Status",  # text on the left part of the image
    "left_color": "#555",  # color of the left part of the image
    "right_pad": 5,
    "border_radius": 5,  # border radius on flat and plastic badges
    # style of the template; available styles are "flat", "flat-square", "plastic"
    "style": "plastic",
    "template_name": "{style}.svg.j2",  # name of the template
    "font_face": "DejaVu Sans",
    "font_size": 11,
    "color_scheme": {  # colors to be used for the right part of the image
        "exception": "#007ec6",  # blue
        "failure": "#e05d44",  # red
        "retry": "#007ec6",  # blue
        "running": "#007ec6",  # blue
        "skipped": "a4a61d",  # yellowgreen
        "success": "#4c1",  # brightgreen
        "unknown": "#9f9f9f",  # lightgrey
        "warnings": "#dfb317"  # yellow
    }
}

Those options can be configured either using the plugin configuration:

c['www'] = {
    'plugins': {'badges': {"left_color": "#222"}}
}

or via URL arguments like http://<buildbotURL>/plugins/badges/<buildername>.svg?left_color=222. Custom templates can also be specified in a template directory near the master.cfg.

The badgeio template

A badges template was developed to standardize upon a consistent "look and feel" across the usage of multiple CI/CD solutions, e.g., use of Buildbot, Codecov.io, and Travis-CI. An example is shown below. To ensure the correct "look and feel", the following Buildbot configuration is needed:

c['www'] = {
    'plugins': {
        'badges': {
            "left_pad": 0,
            "right_pad": 0,
            "border_radius": 3,
            "style": "badgeio"
        }
    }
}

Note: It is highly recommended to use this only with SVG.

2.5.17.2.
Authentication plugins

By default, Buildbot does not require people to authenticate in order to access control features in the web UI. To secure Buildbot, you will need to configure an authentication plugin.

Note: To secure the Buildbot web interface, authorization rules must be provided via the 'authz' configuration. If you simply wish to lock down a Buildbot instance so that only read-only access is permitted, you can restrict access to control endpoints to an unpopulated 'admin' role. For example:

c['www']['authz'] = util.Authz(
    allowRules=[util.AnyControlEndpointMatcher(role="admins")],
    roleMatchers=[]
)

Note: As of Buildbot 0.9.4, the user session is managed via a JWT token, using the HS256 algorithm. The session secret is stored in the database in the object_state table, with the name column being session_secret. Please make sure appropriate access restrictions are placed on this database table.

Authentication plugins are implemented as classes and passed as the auth parameter to www. The available classes are described here:

class buildbot.www.auth.NoAuth

This class is the default authentication plugin, which disables authentication.

class buildbot.www.auth.UserPasswordAuth(users)

Parameters: users – list of ("user", "password") tuples, or a dictionary of {"user": "password", ...}

Simple username/password authentication using a list of user/password tuples provided in the configuration file.

from buildbot.plugins import util
c['www'] = {
    # ...
    'auth': util.UserPasswordAuth({"homer": "doh!"}),
}

class buildbot.www.auth.CustomAuth

This authentication class is meant to be overridden with a custom check_credentials method that gets username and password as arguments and checks whether the user can log in. You may use it, e.g., to check the credentials against an external database or file.

from buildbot.plugins import util

class MyAuth(util.CustomAuth):
    def check_credentials(self, user, password):
        if user == 'snow' and password == 'white':
            return True
        else:
            return False

c['www']['auth'] = MyAuth()

class buildbot.www.auth.HTPasswdAuth(passwdFile)

Parameters: passwdFile – An .htpasswd file to read

This class implements simple username/password authentication against a standard .htpasswd file.

from buildbot.plugins import util
c['www'] = {
    # ...
    'auth': util.HTPasswdAuth("my_htpasswd"),
}

class buildbot.www.oauth2.GoogleAuth(clientId, clientSecret)

Parameters:

- clientId – The client ID of your buildbot application
- clientSecret – The client secret of your buildbot application
- ssl_verify (boolean) – If False, disables SSL certificate verification

This class implements authentication with Google single sign-on. You can look at the Google OAuth2 documentation on how to register your Buildbot instance with the Google systems. The developer console will give you the two parameters you have to give to GoogleAuth. Register your Buildbot instance with the BUILDBOT_URL/auth/login URL as the allowed redirect URI. Example:

from buildbot.plugins import util
c['www'] = {
    # ...
    'auth': util.GoogleAuth("clientid", "clientsecret"),
}

In order to use this module, you need to install the Python requests module:

pip install requests

class buildbot.www.oauth2.GitHubAuth(clientId, clientSecret)

Parameters:

- clientId – The client ID of your buildbot application
- clientSecret – The client secret of your buildbot application
- serverURL – The server URL if this is a GitHub Enterprise server
- apiVersion – The GitHub API version to use. One of 3 or 4 (V3/REST or V4/GraphQL). Defaults to 3.
- getTeamsMembership – When True, fetch all team memberships for each of the organizations the user belongs to. The teams will be included in the user's groups as org-name/team-name.
param debug : When True and using apiVersion=4 , log some additional calls showing the GraphQL queries and responses, for debugging purposes. param boolean ssl_verify : If False , disables SSL certificate verification This class implements authentication with GitHub single sign-on. It functions almost identically to the GoogleAuth class. Register your Buildbot instance with the BUILDBOT_URL/auth/login URL as the allowed redirect URI. The user's email address (used, e.g., for authorization) is set to the "primary" address the user configured on GitHub. When using group-based authorization, the user's groups are the names of the GitHub organizations the user is a member of. Example: from buildbot.plugins import util c['www'] = { # ... 'auth': util.GitHubAuth("clientid", "clientsecret"), } Example for GitHub Enterprise: from buildbot.plugins import util c['www'] = { # ... 'auth': util.GitHubAuth("clientid", "clientsecret", "https://git.corp.mycompany.com"), } An example of fetching team membership could be: from buildbot.plugins import util c['www'] = { # ... 'auth': util.GitHubAuth("clientid", "clientsecret", apiVersion=4, getTeamsMembership=True), 'authz': util.Authz( allowRules=[ util.AnyControlEndpointMatcher(role="core-developers"), ], roleMatchers=[ util.RolesFromGroups(groupPrefix='buildbot/') ] ) } If the buildbot organization had two teams, for example 'core-developers' and 'contributors', then with the above example any user belonging to those teams would be granted the roles matching those team names. In order to use this module, you need to install the Python requests module: pip install requests class buildbot.www.oauth2.
GitLabAuth ( instanceUri , clientId , clientSecret ) Parameters : instanceUri – The URI of your GitLab instance clientId – The client ID of your buildbot application clientSecret – The client secret of your buildbot application ssl_verify ( boolean ) – If False disables SSL certificate verification This class implements an authentication with GitLab single sign-on. It functions almost identically to the GoogleAuth class. Register your Buildbot instance with the BUILDBOT_URL/auth/login URL as the allowed redirect URI. Example: from buildbot.plugins import util c [ 'www' ] = { # ... 'auth' : util . GitLabAuth ( "https://gitlab.com" , "clientid" , "clientsecret" ), } In order to use this module, you need to install the Python requests module: pip install requests class buildbot.www.oauth2. BitbucketAuth ( clientId , clientSecret ) Parameters : clientId – The client ID of your buildbot application clientSecret – The client secret of your buildbot application ssl_verify ( boolean ) – If False disables SSL certificate verification This class implements an authentication with Bitbucket single sign-on. It functions almost identically to the GoogleAuth class. Register your Buildbot instance with the BUILDBOT_URL/auth/login URL as the allowed redirect URI. Example: from buildbot.plugins import util c [ 'www' ] = { # ... 'auth' : util . BitbucketAuth ( "clientid" , "clientsecret" ), } In order to use this module, you need to install the Python requests module: pip install requests class buildbot.www.auth. RemoteUserAuth Parameters : header – header to use to get the username (defaults to REMOTE_USER ) headerRegex – regular expression to get the username from header value (defaults to "(?P<username>[^ @]+)@(?P<realm>[^ @]+)") . Note that you need at least to specify a ?P<username> regular expression named group. 
userInfoProvider – user info provider; see User Information If the Buildbot UI is served through a reverse proxy that supports HTTP-based authentication (such as Apache or lighttpd), it's possible to tell Buildbot to trust the web server and get the username from the request headers. The administrator must make sure that it's impossible to access Buildbot in any way other than through the frontend. Usually this means that Buildbot should listen for incoming connections only on localhost (or on some firewall-protected port). The reverse proxy must require HTTP authentication to access Buildbot pages (using any source for credentials, such as htpasswd, PAM, LDAP, or Kerberos). Example: from buildbot.plugins import util c['www'] = { # ... 'auth': util.RemoteUserAuth(), } A corresponding Apache configuration example: <Location "/"> AuthType Kerberos AuthName "Buildbot login via Kerberos" KrbMethodNegotiate On KrbMethodK5Passwd On KrbAuthRealms <<YOUR CORP REALMS>> KrbVerifyKDC off KrbServiceName Any Krb5KeyTab /etc/krb5/krb5.keytab KrbSaveCredentials Off require valid-user Order allow,deny Satisfy Any # SSO RewriteEngine On RewriteCond %{LA-U:REMOTE_USER} (.+)$ RewriteRule . - [E=RU:%1,NS] RequestHeader set REMOTE_USER %{RU}e </Location> The advantage of this sort of authentication is that it uses a proven and fast authentication implementation. The problem is that the only information passed to Buildbot is the username; there is no way to pass any other information such as the user's email or groups. That information can be very useful to the mailstatus plugin, or for authorization processes. See User Information for a mechanism to supply that information. 2.5.17.3. User Information For authentication mechanisms which cannot provide complete information about a user, Buildbot needs another way to get user data.
This is useful both for authentication (to fetch more data about the logged-in user) and for avatars (to fetch data about other users). This extra information is provided, appropriately enough, by user info providers. These can be passed to RemoteUserAuth and as an element of avatar_methods . They can also be passed to oauth2 authentication plugins; in this case the username provided by oauth2 will be used, and all other information will be taken from LDAP (full name, email, and groups). Currently only one provider is available: class buildbot.ldapuserinfo. LdapUserInfo ( uri , bindUser , bindPw , accountBase , accountPattern , groupBase=None , groupMemberPattern=None , groupName=None , accountFullName , accountEmail , avatarPattern=None , avatarData=None , accountExtraFields=None , tls=None ) Parameters : uri – URI of the LDAP server bindUser – username of the LDAP account that is used to get the info for other users (usually a "faceless" account) bindPw – password of the bindUser accountBase – the base dn (distinguished name) of the user database accountPattern – the pattern for searching in the account database. This must contain the %(username)s string, which is replaced by the searched username accountFullName – the name of the field in the account LDAP database where the full user name is to be found. accountEmail – the name of the field in the account LDAP database where the user email is to be found. groupBase – the base dn of the groups database groupMemberPattern – the pattern for searching in the group database. This must contain the %(dn)s string, which is replaced by the searched username's dn groupName – the name of the field in the groups LDAP database where the group name is to be found. avatarPattern – the pattern for searching avatars from emails in the account database. This must contain the %(email)s string, which is replaced by the searched email avatarData – the name of the field in the account LDAP database where the avatar picture is to be found.
This field is supposed to contain the raw picture; the format is automatically detected from JPEG, PNG, or GIF. accountExtraFields – extra fields to extract for use with the authorization policies tls – an instance of ldap.Tls that specifies TLS settings. If one of the three optional group parameters is supplied, then all of them become mandatory. If none is supplied, the retrieved user info has an empty list of groups. Example: from buildbot.plugins import util # this configuration works for the MS Active Directory ldap implementation # we use it for user info, and avatars userInfoProvider = util.LdapUserInfo( uri='ldap://ldap.mycompany.com:3268', bindUser='ldap_user', bindPw='p4$$wd', accountBase='dc=corp,dc=mycompany,dc=com', groupBase='dc=corp,dc=mycompany,dc=com', accountPattern='(&(objectClass=person)(sAMAccountName=%(username)s))', accountFullName='displayName', accountEmail='mail', groupMemberPattern='(&(objectClass=group)(member=%(dn)s))', groupName='cn', avatarPattern='(&(objectClass=person)(mail=%(email)s))', avatarData='thumbnailPhoto', ) c['www'] = { "port": PORT, "allowed_origins": ["*"], "url": c['buildbotURL'], "auth": util.RemoteUserAuth(userInfoProvider=userInfoProvider), "avatar_methods": [userInfoProvider, util.AvatarGravatar()] } Note In order to use this module, you need to install the ldap3 module: pip install ldap3 In the case of oauth2 authentication, you have to pass the userInfoProvider as a keyword argument: from buildbot.plugins import util userInfoProvider = util.LdapUserInfo( ... ) c['www'] = { # ... 'auth': util.GoogleAuth("clientid", "clientsecret", userInfoProvider=userInfoProvider), } 2.5.17.4. Reverse Proxy Configuration It is usually better to put Buildbot behind a reverse proxy in production.
Provides automatic gzip compression Provides SSL support with a widely used implementation Provides support for HTTP/2 or SPDY for fast parallel REST API access from the browser A reverse proxy can, however, be problematic for websockets; you have to configure it specifically to pass websocket requests. Here is an nginx configuration that is known to work (nginx 1.6.2): server { # Enable SSL and http2 listen 443 ssl http2 default_server; server_name yourdomain.com; root html; index index.html index.htm; ssl on; ssl_certificate /etc/nginx/ssl/server.cer; ssl_certificate_key /etc/nginx/ssl/server.key; # put a one day session timeout for websockets to stay longer ssl_session_cache shared:SSL:10m; ssl_session_timeout 1440m; # please consult latest nginx documentation for current secure encryption settings ssl_protocols .. ssl_ciphers .. ssl_prefer_server_ciphers on; # force https add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;"; spdy_headers_comp 5; proxy_set_header HOST $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-Host $host; # you could use / if you use domain based proxy instead of path based proxy location /buildbot/ { proxy_pass http://127.0.0.1:5000/; } location /buildbot/sse/ { # proxy buffering will prevent sse to work proxy_buffering off; proxy_pass http://127.0.0.1:5000/sse/; } # required for websocket location /buildbot/ws { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_pass http://127.0.0.1:5000/ws; # raise the proxy timeout for the websocket proxy_read_timeout 6000s; } } To run with Apache2, you'll need mod_proxy_wstunnel in addition to mod_proxy_http .
Serving HTTPS ( mod_ssl ) is advised to prevent issues with enterprise proxies (see Server Sent Events ), even if you don't need the encryption itself. Here is a configuration that is known to work (Apache 2.4.10 / Debian 8, Apache 2.4.25 / Debian 9, Apache 2.4.6 / CentOS 7), directly at the top of the domain. If you want to add access control directives, just put them in a <Location /> . <VirtualHost *:443> ServerName buildbot.example ServerAdmin webmaster@buildbot.example # replace with actual port of your Buildbot master ProxyPass /ws ws://127.0.0.1:8020/ws ProxyPassReverse /ws ws://127.0.0.1:8020/ws ProxyPass / http://127.0.0.1:8020/ ProxyPassReverse / http://127.0.0.1:8020/ SetEnvIf X-Url-Scheme https HTTPS=1 ProxyPreserveHost On SSLEngine on SSLCertificateFile /path/to/cert.pem SSLCertificateKeyFile /path/to/cert.key # check Apache2 documentation for current safe SSL settings # This is actually the Debian 8 default at the time of this writing: SSLProtocol all -SSLv3 </VirtualHost> 2.5.17.5. Authorization rules The authorization framework in Buildbot is very generic and flexible. The drawback is that it is not very obvious for newcomers. The 'simple' example will, however, let you easily start with an admins-have-all-rights setup. Please read the following documentation carefully to understand how to set up authorization in Buildbot. The authorization framework is tightly coupled to the REST API. It only works for HTTP, not for other means of interaction such as IRC or the try scheduler. It allows or denies access to the REST APIs according to rules. A role is a label that you give to a user. It is similar to, but distinct from, the usual notion of a group: A user can have several roles, and a role can be given to several users. A role is an application-specific notion, while a group is a more organization-specific notion. Groups are given by the auth plugin, e.g., ldap or github , and are not always under the precise control of the buildbot admins.
Roles can be dynamically assigned according to the context. For example, there is the owner role, which is given to the user at the origin of a build, so that they can stop or rebuild only their own builds. Endpoint matchers associate role requirements with REST API endpoints. The default policy is allow in case no matcher matches (see below why). Role matchers associate authenticated users with roles. Restricting Read Access Please note that you can use this framework to deny read access to the REST API, but there is no access control in the websocket or SSE APIs. Practically this means users will still see live updates from running builds in the UI, as those come through the websocket. The only resources that are readable exclusively through the REST API are the log data (a.k.a. logchunks ). From a strict security point of view, you cannot really use the Buildbot Authz framework to securely deny read access to your bot. The access control is rather designed to restrict the control APIs, which are only accessible through the REST API. In order to reduce the attack surface, we recommend placing Buildbot behind an access-controlled reverse proxy such as OAuth2Proxy . Authz Configuration class buildbot.www.authz. Authz ( allowRules = [] , roleMatcher = [] , stringsMatcher = util.fnmatchStrMatcher ) Parameters : allowRules – List of EndpointMatcherBase processed in order for each endpoint grant request. roleMatcher – List of RoleMatchers stringsMatcher – Selects the algorithm used for string comparison (used to compare roles and builder names). Can be util.fnmatchStrMatcher or util.reStrMatcher , both available via from buildbot.plugins import util . Authz needs to be configured in c['www']['authz'] Endpoint matchers Endpoint matchers are responsible for creating rules to match REST endpoints and requiring roles for them. Endpoint matchers are processed in the order they are configured. The first rule matching an endpoint will prevent further rules from being checked.
To continue checking other rules when the result is deny , set defaultDeny=False . If no endpoint matcher matches, then access is granted. One can implement a default-deny policy by putting an AnyEndpointMatcher with a nonexistent role at the end of the list. Please note that this will deny all REST APIs, and most of the UI does not display a proper access-denied message in case of such an error. The following sequence is implemented by each EndpointMatcher class: Check whether the requested endpoint is supported by this matcher Get the necessary info from the data API and decide whether it matches Check whether the user has the required role Several endpoint matchers are currently implemented. If you need a very complex setup, you may need to implement your own endpoint matchers. In this case, you can look at the source code for detailed examples of how to write endpoint matchers. class buildbot.www.authz.endpointmatchers. EndpointMatcherBase ( role , defaultDeny = True ) Parameters : role – The role which grants access to this endpoint. A list of roles is not supported, but a fnmatch expression can be provided to match several roles. defaultDeny – The role-matcher algorithm will stop if this value is true and the endpoint matched. This is the base endpoint matcher. Its arguments are inherited by all the other endpoint matchers. class buildbot.www.authz.endpointmatchers. AnyEndpointMatcher ( role ) Parameters : role – The role which grants access to any endpoint. AnyEndpointMatcher grants all rights to people with the given role (usually "admins"). class buildbot.www.authz.endpointmatchers. AnyControlEndpointMatcher ( role ) Parameters : role – The role which grants access to any control endpoint. AnyControlEndpointMatcher grants control rights to people with the given role (usually "admins"). This endpoint matcher matches current and future control endpoints. You need to add this at the end of your configuration to make sure it is future-proof. class buildbot.www.authz.endpointmatchers.
ForceBuildEndpointMatcher ( builder , role ) Parameters : builder – Name of the builder. role – The role needed to get access to such endpoints. ForceBuildEndpointMatcher grants right to force builds. class buildbot.www.authz.endpointmatchers. StopBuildEndpointMatcher ( builder , role ) Parameters : builder – Name of the builder. role – The role needed to get access to such endpoints. StopBuildEndpointMatcher grants rights to stop builds. class buildbot.www.authz.endpointmatchers. RebuildBuildEndpointMatcher ( builder , role ) Parameters : builder – Name of the builder. role – The role needed to get access to such endpoints. RebuildBuildEndpointMatcher grants rights to rebuild builds. class buildbot.www.authz.endpointmatchers. EnableSchedulerEndpointMatcher ( builder , role ) Parameters : builder – Name of the builder. role – The role needed to get access to such endpoints. EnableSchedulerEndpointMatcher grants rights to enable and disable schedulers via the UI. Role matchers Role matchers are responsible for creating rules to match people and grant them roles. You can grant roles from groups information provided by the Auth plugins, or if you prefer directly to people’s email. class buildbot.www.authz.roles. RolesFromGroups ( groupPrefix ) Parameters : groupPrefix – Prefix to remove from each group RolesFromGroups grants roles from the groups of the user. If a user has group buildbot-admin , and groupPrefix is buildbot- , then user will be granted the role ‘admin’ ex: roleMatchers = [ util . RolesFromGroups ( groupPrefix = "buildbot-" ) ] class buildbot.www.authz.roles. RolesFromEmails ( roledict ) Parameters : roledict – Dictionary with key=role, and value=list of email strings RolesFromEmails grants roles to users according to the hardcoded emails. ex: roleMatchers = [ util . RolesFromEmails ( admins = [ "my@email.com" ]) ] class buildbot.www.authz.roles. 
RolesFromDomain ( roledict ) Parameters : roledict – Dictionary with key=role and value=list of domain strings RolesFromDomain grants roles to users according to their email domains. In the example below, a user who logs in with the email foo@gmail.com will be granted the role 'admins'. ex: roleMatchers = [ util.RolesFromDomain(admins=["gmail.com"]) ] class buildbot.www.authz.roles. RolesFromOwner ( roledict ) Parameters : roledict – Dictionary with key=role and value=list of email strings RolesFromOwner grants a given role when the owner property matches the email of the user. ex: roleMatchers = [ RolesFromOwner(role="owner") ] class buildbot.www.authz.roles. RolesFromUsername ( roles , usernames ) Parameters : roles – Roles to assign when the username matches. usernames – List of usernames that have the roles. RolesFromUsername grants the given roles when the username property is within the list of usernames. ex: roleMatchers = [ RolesFromUsername(roles=["admins"], usernames=["root"]), RolesFromUsername(roles=["developers", "integrators"], usernames=["Alice", "Bob"]) ] Example Configs Simple config which allows admins to control everything, but allows anonymous users to look at build results: from buildbot.plugins import * authz = util.Authz( allowRules=[ util.AnyControlEndpointMatcher(role="admins"), ], roleMatchers=[ util.RolesFromEmails(admins=["my@email.com"]) ] ) auth = util.UserPasswordAuth({'my@email.com': 'mypass'}) c['www']['auth'] = auth c['www']['authz'] = authz More complex config with separation per branch: from buildbot.plugins import * authz = util.Authz( stringsMatcher=util.fnmatchStrMatcher, # simple matcher with '*' glob character # stringsMatcher=util.reStrMatcher, # if you prefer regular expressions allowRules=[ # admins can do anything, # defaultDeny=False: if the user does not have the admin role, we continue parsing rules util.
AnyEndpointMatcher(role="admins", defaultDeny=False), util.StopBuildEndpointMatcher(role="owner"), # *-try groups can start "try" builds util.ForceBuildEndpointMatcher(builder="try", role="*-try"), # *-mergers groups can start "merge" builds util.ForceBuildEndpointMatcher(builder="merge", role="*-mergers"), # *-releasers groups can start "release" builds util.ForceBuildEndpointMatcher(builder="release", role="*-releasers"), # if a future Buildbot implements a new control, we are safe with this last rule util.AnyControlEndpointMatcher(role="admins") ], roleMatchers=[ util.RolesFromGroups(groupPrefix="buildbot-"), # "reaper-try" is not a valid Python identifier, so it is passed via dict unpacking util.RolesFromEmails(admins=["homer@springfieldplant.com"], **{"reaper-try": ["007@mi6.uk"]}), # the owner role is granted when the owner property matches the email of the user util.RolesFromOwner(role="owner") ] ) c['www']['authz'] = authz Using GitHub authentication and allowing access to control endpoints for users in the "Buildbot" organization: from buildbot.plugins import * authz = util.Authz( allowRules=[ util.AnyControlEndpointMatcher(role="BuildBot") ], roleMatchers=[ util.RolesFromGroups() ] ) auth = util.GitHubAuth('CLIENT_ID', 'CLIENT_SECRET') c['www']['auth'] = auth c['www']['authz'] = authz
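The glob-style role matching used in the examples above (e.g., role="*-try" compared with util.fnmatchStrMatcher ) can be illustrated with Python's standard fnmatch module. This is a simplified sketch of the matching concept only, not Buildbot's actual matcher implementation; the role names and patterns are hypothetical.

```python
# Sketch of glob-style role matching, as done by a stringsMatcher such as
# util.fnmatchStrMatcher. Uses only the Python standard library.
from fnmatch import fnmatchcase

def role_matches(required_role_pattern, user_roles):
    """Return True if any of the user's roles matches the required pattern."""
    return any(fnmatchcase(role, required_role_pattern) for role in user_roles)

# With groupPrefix="buildbot-", a user in a hypothetical "buildbot-core-try"
# group would hold the role "core-try" and match the "*-try" rule.
print(role_matches("*-try", ["core-try", "docs"]))      # True
print(role_matches("*-mergers", ["core-try", "docs"]))  # False
print(role_matches("admins", ["admins"]))               # True (exact match)
```

Note that fnmatch patterns without wildcard characters degrade to exact comparison, which is why plain role names such as "admins" also work under the default fnmatch matcher.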
LLVM Weekly - #586, March 24th 2025 Welcome to the five hundred and eighty-sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Min-Yih Hsu blogged about calculating throughput with LLVM's scheduling model . According to the LLVM calendar, in the coming week there will be the following: Office hours with the following hosts: Kristof Beyls, Amara Emerson, Johannes Doerfert. Online sync-ups on the following topics: ClangIR, pointer authentication, OpenMP, Flang, RISC-V backend, LLVM embedded toolchains. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Asher Mancinelli started an MLIR RFC discussion on an attribute interface for loop annotation metadata . The LLVM security response group 2024 transparency report was published . LLVM 20.1.1 was released . Maxwell Bland kicked off a discussion about ideas on heap canaries for Linux . Shilei Tian would like to introduce a sentinel pointer value to DataLayout . Maksim Levental proposed upstreaming CIRCT's Verif and SMT dialects . Nikita Popov suggested that in the future the llvm.experimental prefix shouldn't be used for intrinsics and they should just be marked as experimental in documentation. Tom Stellard has created an allow list of third-party GitHub Actions . Mark de Wever started an RFC thread on breaking basic_format_string 's ABI for performance improvements .
Matthias Springer would like to allow pointers as element type of MLIR’s VectorType . Asher Mancinelli suggested a volatile representation in Flang . Andrew Rogers shared a PSA on work to annotate LLVM public interfaces as part of efforts to build LLVM as a DLL on Windows. LLVM commits The DAGCombiner learned to avoid store merging across function calls if the spilling is unprofitable. Taken alongside previously committed tweaks to costing, this can make a big difference on some RISC-V inputs, such as a greater than 10% reduction in runtime for 544.nab_r from SPEC on the BananaPi-F3 in the tested configuration. f138e36 . A DXIL instruction legalizer pass was started. a2fbc9a . update_test_checks learned a new --filter-out-after option which stops the generation of any CHECK lines beyond the line that matches the filter. 194ecef . MC layer (assembler/disassembler) support was added for the RISC-V vendor extensions Xqcibi (branch immediate), Xqcisim (simulation hints), Xqcilb (long branch), and Xqcisync (sync delay) from Qualcomm. 036c6cb , 467e5a1 , 0744d49 , 3840f78 . LangRef documentation was added for llvm.readsteadycounter . cc2a86a . Assembler support was added for the RISC-V Zilsd and Zclsd extensions ((compressed) load/store pair instructions). Also experimental assembler support for the Zvqdotq (vector quad widening dot product) extension. 480202f , eb77061 . The 2024 security group transparency report was committed. 9a078a3 . The DirectX root signature binary representation was documented. d0d33d2 . GPU loader utilities were moved from libc to LLVM. bd6df0f . Support was added for inline SPIR-V types. 864a83d . LoopIdiomRecognize can now recognise loops implementing strlen/wcslen. ac9049d . Clang commits Clang Static Analyzer can now collect statistics per entry point rather than just per translation unit. 57e3641 . Clang adopted support for GCC’s ASM constexpr string extension. 911b200 . 
ClangIR upstreaming continues to progress, with support added for unary ops, CastOp, scalar conversions, and empty for loops. The cir-translate and cir-lsp-server tools and the cir-canonicalize pass were also upstreamed. 5f86666 , 27d8bd3 , 1ae307a , 39ce995 , f51e5f3 . The cplusplus.PureVirtualCall checker was documented. 9762b8e . The compilation time overhead of enabling -Wunsafe-buffer-usage was reduced by ~88%, leaving a compile-time overhead of ~1.7% on the benchmarked inputs. f5ee105 . A bugprone-capturing-this-in-member-variable checker was added. 3b1e18c . Other project commits ASan learned to re-exec without ASLR enabled on 32-bit Linux, if necessary. 3b3f8c5 . std::flat_set was implemented in libcxx. 2f1416b . The MLIR sub-channel quantization RFC started to be implemented. 81d7eef . Subscribe at LLVMWeekly.org .
Log4j API Log4j is essentially composed of a logging API called Log4j API , and its reference implementation called Log4j Core . What is a logging API and a logging implementation? Logging API A logging API is an interface your code or your dependencies directly logs against. It is required at compile-time. It is implementation agnostic to ensure that your application can write logs, but is not tied to a specific logging implementation. Log4j API, SLF4J , JUL (Java Logging) , JCL (Apache Commons Logging) , JPL (Java Platform Logging) and JBoss Logging are major logging APIs.
Logging implementation A logging implementation is only required at runtime and can be changed without the need to recompile your software. Log4j Core, JUL (Java Logging) , Logback are the most well-known logging implementations. Are you looking for a crash course on how to use Log4j in your application or library? See Getting started . You can also check out Installation for the complete installation instructions. Log4j API provides A logging API that libraries and applications can code to A minimal logging implementation (aka. Simple logger) Adapter components to create a logging implementation This page tries to cover the most prominent Log4j API features. Did you know that Log4j provides specialized APIs for Kotlin and Scala? Check out Log4j Kotlin and Log4j Scala projects for details. Introduction To log, you need a Logger instance which you will retrieve from the LogManager . These are all part of the log4j-api module, which you can install as follows: Maven Gradle <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>${log4j-api.version}</version> </dependency> implementation 'org.apache.logging.log4j:log4j-api:${log4j-api.version}' You can use the Logger instance to log by using methods like info() , warn() , error() , etc. These methods are named after the log levels they represent, a way to categorize log events by severity. The log message can also contain placeholders written as {} that will be replaced by the arguments passed to the method. import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.LogManager; public class DbTableService { private static final Logger LOGGER = LogManager.getLogger(); (1) public void truncateTable(String tableName) throws IOException { LOGGER.warn("truncating table `{}`", tableName); (2) db.truncate(tableName); } } 1 The returned Logger instance is thread-safe and reusable. 
Unless explicitly provided as an argument, getLogger() associates the returned Logger with the enclosing class, that is, DbTableService in this example. 2 The placeholder {} in the message will be replaced with the value of tableName The generated log event , which contain the user-provided log message and log level (i.e., WARN ), will be enriched with several other implicitly derived contextual information: timestamp, class & method name, line number, etc. What happens to the generated log event will vary significantly depending on the configuration used. It can be pretty-printed to the console, written to a file, or get totally ignored due to insufficient severity or some other filtering. Log levels are used to categorize log events by severity and control the verbosity of the logs. Log4j contains various predefined levels, but the most common are DEBUG , INFO , WARN , and ERROR . With them, you can filter out less important logs and focus on the most critical ones. Previously we used Logger#warn() to log a warning message, which could mean that something is not right, but the application can continue. Log levels have a priority, and WARN is less severe than ERROR . Exceptions are often also errors. In this case, we might use the ERROR log level. Make sure to log exceptions that have diagnostics value. This is simply done by passing the exception as the last argument to the log method: LOGGER.warn("truncating table `{}`", tableName); try { db.truncate(tableName); } catch (IOException exception) { LOGGER.error("failed truncating table `{}`", tableName, exception); (1) throw new IOException("failed truncating table: " + tableName, exception); } 1 By using error() instead of warn() , we signal that the operation failed. While there is only one placeholder in the message, we pass two arguments: tableName and exception . Log4j will attach the last extra argument of type Throwable in a separate field to the generated log event. 
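The sequential {} placeholder substitution described above can be sketched in a few lines of Python. This illustrates the parameterized-message concept only; it is not Log4j's actual formatter, which additionally handles escaped braces and treats a trailing Throwable argument specially, as noted above.

```python
# Sketch of Log4j-style "{}" placeholder substitution: each "{}" in the
# message template is replaced, left to right, by the next argument.
# Simplification: surplus placeholders are dropped rather than kept literal.
def format_message(template, *args):
    parts = template.split("{}")
    out = []
    # Interleave the literal parts with the stringified arguments.
    for i, part in enumerate(parts):
        out.append(part)
        if i < len(args) and i < len(parts) - 1:
            out.append(str(args[i]))
    return "".join(out)

print(format_message("truncating table `{}`", "users"))
# truncating table `users`
```

Deferring the substitution to the logging backend (rather than concatenating strings at the call site) is what lets a logging implementation skip the formatting work entirely when the event is filtered out by its level.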
Log messages are often used interchangeably with log events. While this simplification holds in several cases, it is not technically correct. A log event, capturing the logging context (level, logger name, instant, etc.) along with the log message, is generated by the logging implementation (e.g., Log4j Core) when a user issues a log using a logger, e.g., LOGGER.info("Hello, world!"). Hence, log events are compound objects containing log messages.

Log events contain fields that can be classified into three categories:

- Some fields are provided explicitly, in a Logger method call. The most important are the log level and the log message, a description of what happened that is addressed to humans.
- Some fields are contextual (e.g., Thread Context) and are either provided explicitly by developers of other parts of the application, or are injected by Java instrumentation.
- The last category of fields is those that are computed automatically by the logging implementation employed.
For clarity's sake, let us look at a log event formatted as JSON:

{ (1)
  "log.level": "INFO",
  "message": "Unable to insert data into my_table.",
  "error.type": "java.lang.RuntimeException",
  "error.message": null,
  "error.stack_trace": [
    {
      "class": "com.example.Main",
      "method": "doQuery",
      "file.name": "Main.java",
      "file.line": 36
    },
    {
      "class": "com.example.Main",
      "method": "main",
      "file.name": "Main.java",
      "file.line": 25
    }
  ],
  "marker": "SQL",
  "log.logger": "com.example.Main",
  (2)
  "tags": ["SQL query"],
  "labels": {
    "span_id": "3df85580-f001-4fb2-9e6e-3066ed6ddbb1",
    "trace_id": "1b1f8fc9-1a0c-47b0-a06f-af3c1dd1edf9"
  },
  (3)
  "@timestamp": "2024-05-23T09:32:24.163Z",
  "log.origin.class": "com.example.Main",
  "log.origin.method": "doQuery",
  "log.origin.file.name": "Main.java",
  "log.origin.file.line": 36,
  "process.thread.id": 1,
  "process.thread.name": "main",
  "process.thread.priority": 5
}

1 Explicitly supplied fields:
- log.level – the level of the event, either explicitly provided as an argument to the logger call, or implied by the name of the logger method
- message – the log message that describes what happened
- error.* – an optional Throwable explicitly passed as an argument to the logger call
- marker – an optional marker explicitly passed as an argument to the logger call
- log.logger – the logger name provided explicitly to LogManager.getLogger() or inferred by the Log4j API

2 Contextual fields:
- tags – the Thread Context stack
- labels – the Thread Context map

3 Logging-backend-specific fields. If you are using Log4j Core, the following fields can be automatically generated:
- @timestamp – the instant of the logger call
- log.origin.* – the location of the logger call in the source code
- process.thread.* – the ID, name, and priority of the Java thread where the logger is called

Best practices

There are several widespread bad practices in the use of the Log4j API. Let's walk through the most common ones and see how to fix them.

Don't use toString()

Don't use Object#toString() in arguments; it is redundant!

/* BAD!
*/ LOGGER.info("userId: {}", userId.toString());

The underlying message type and layout will deal with the arguments:

/* GOOD */ LOGGER.info("userId: {}", userId);

Pass the exception as the last extra argument

Don't call Throwable#printStackTrace()! This not only circumvents the logging but can also leak sensitive information!

/* BAD! */ exception.printStackTrace();

Don't use Throwable#getMessage()! This prevents the log event from getting enriched with the exception.

/* BAD! */ LOGGER.info("failed", exception.getMessage());
/* BAD! */ LOGGER.info("failed for user ID `{}`: {}", userId, exception.getMessage());

Don't provide both Throwable#getMessage() and the Throwable itself! This bloats the log message with a duplicate exception message.

/* BAD! */ LOGGER.info("failed for user ID `{}`: {}", userId, exception.getMessage(), exception);

Pass the exception as the last extra argument:

/* GOOD */ LOGGER.error("failed", exception);
/* GOOD */ LOGGER.error("failed for user ID `{}`", userId, exception);

Don't use string concatenation

If you are using String concatenation while logging, you are doing something very wrong and dangerous! Don't use String concatenation to format arguments! This circumvents the handling of arguments by the message type and layout. More importantly, this approach is prone to attacks! Imagine userId being provided by the user with the following content:

placeholders for non-existing args to trigger failure: {} {} {dangerousLookup}

/* BAD! */ LOGGER.info("failed for user ID: " + userId);

Use message parameters:

/* GOOD */ LOGGER.info("failed for user ID `{}`", userId);

Use Suppliers to pass computationally expensive arguments

If one or more arguments of the log statement are computationally expensive, it is not wise to evaluate them knowing that their results can be discarded. Consider the following example:

/* BAD!
*/ LOGGER.info("failed for user ID `{}` and role `{}`", userId, db.findUserRoleById(userId));

The database query (i.e., db.findUserRoleById(userId)) can be a significant bottleneck if the resulting log event will be discarded anyway – maybe the INFO level is not enabled for this logger, or the event is dropped due to some other filtering. The old-school way of solving this problem is to level-guard the log statement:

/* OKAY */ if (LOGGER.isInfoEnabled()) { LOGGER.info(...); }

While this works for cases where the message can be dropped due to an insufficient level, this approach is still prone to other filtering cases; e.g., maybe the associated marker is not accepted. Use Suppliers to pass arguments containing computationally expensive items:

/* GOOD */ LOGGER.info("failed for user ID `{}` and role `{}`", () -> userId, () -> db.findUserRoleById(userId));

Use a Supplier to pass the message and its arguments containing computationally expensive items:

/* GOOD */ LOGGER.info(() -> new ParameterizedMessage("failed for user ID `{}` and role `{}`", userId, db.findUserRoleById(userId)));

Loggers

Loggers are the primary entry point for logging. In this section we will introduce you to further details about Loggers. Refer to Architecture to see where Loggers stand in the big picture.

Logger names

Most logging implementations use a hierarchical scheme for matching logger names with logging configuration. In this scheme, the logger name hierarchy is represented by . (dot) characters in the logger name, in a fashion very similar to the hierarchy used for Java package names. For example, org.apache.logging.appender and org.apache.logging.filter both have org.apache.logging as their parent. In most cases, applications name their loggers by passing the current class's name to LogManager.getLogger(...). Because this usage is so common, Log4j provides that as the default when the logger name parameter is either omitted or null.
For example, all Logger-typed variables below will have the name com.example.LoggerNameTest:

public class LoggerNameTest {
    Logger logger1 = LogManager.getLogger(LoggerNameTest.class);
    Logger logger2 = LogManager.getLogger(LoggerNameTest.class.getName());
    Logger logger3 = LogManager.getLogger();
}

We suggest using LogManager.getLogger() without any arguments, since it delivers the same functionality with fewer characters and is not prone to copy-paste errors.

Logger message factories

Loggers translate LOGGER.info("Hello, {}!", name); calls to the appropriate canonical logging method:

LOGGER.log(Level.INFO, messageFactory.createMessage("Hello, {}!", new Object[] {name}));

How Hello, {}! is encoded, given the {name} argument array, depends entirely on the MessageFactory employed. Log4j allows users to customize this behaviour in several getLogger() methods of LogManager:

LogManager.getLogger()                                       (1)
    .info("Hello, {}!", name);                               (2)
LogManager.getLogger(StringFormatterMessageFactory.INSTANCE) (3)
    .info("Hello, %s!", name);                               (4)

1 Create a logger using the default message factory.
2 Use the default parameter placeholders, that is, the {} style.
3 Explicitly provide the message factory, that is, StringFormatterMessageFactory. Note that there are several other getLogger() methods accepting a MessageFactory.
4 Note the placeholder change from {} to %s! The passed Hello, %s! and name arguments will be implicitly translated to a String.format("Hello, %s!", name) call due to the employed StringFormatterMessageFactory.

Log4j bundles several predefined message factories. Some common ones are accessible through convenient factory methods, which we will cover below.

Formatter logger

The Logger instance returned by default replaces the occurrences of {} placeholders with the toString() output of the associated parameter.
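The default {}-style substitution can be sketched in plain Java. This is an illustrative stand-in that assumes nothing about Log4j's actual ParameterizedMessage internals (which additionally handle escaped placeholders and garbage-free encoding):

```java
// Minimal sketch of {}-style parameter substitution: each "{}" is replaced,
// left to right, with the string value of the corresponding argument.
public class PlaceholderDemo {

    static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int from = 0;
        int argIndex = 0;
        int at;
        // Stop when either the placeholders or the arguments run out.
        while (argIndex < args.length && (at = pattern.indexOf("{}", from)) >= 0) {
            out.append(pattern, from, at).append(args[argIndex++]);
            from = at + 2;
        }
        // Append the remainder of the pattern unchanged.
        return out.append(pattern.substring(from)).toString();
    }

    public static void main(String[] args) {
        System.out.println(format("Hello, {}!", "world"));              // Hello, world!
        System.out.println(format("user `{}` role `{}`", 42, "admin")); // user `42` role `admin`
    }
}
```

The java.util.Formatter-based formatter loggers discussed next trade this simple positional substitution for printf-style conversion specifiers.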
If you need more control over how the parameters are formatted, you can also use java.util.Formatter format strings by obtaining your Logger using LogManager#getFormatterLogger():

Logger logger = LogManager.getFormatterLogger();
logger.debug("Logging in user %s with birthday %s", user.getName(), user.getBirthdayCalendar());
logger.debug("Logging in user %1$s with birthday %2$tm %2$te,%2$tY", user.getName(), user.getBirthdayCalendar());
logger.debug("Integer.MAX_VALUE = %,d", Integer.MAX_VALUE);
logger.debug("Long.MAX_VALUE = %,d", Long.MAX_VALUE);

Loggers returned by getFormatterLogger() are referred to as formatter loggers.

printf() method

Formatter loggers give fine-grained control over the output format, but have the drawback that the correct type must be specified. For example, passing anything other than a decimal integer for a %d format parameter throws an exception. If you mainly use {}-style parameters, but occasionally need fine-grained control over the output format, you can use the Logger#printf() method:

Logger logger = LogManager.getLogger("Foo");
logger.debug("Opening connection to {}...", someDataSource);
logger.printf(Level.INFO, "Hello, %s!", userName);

Formatter performance

Keep in mind that, in contrast to the formatter logger, the default Log4j logger (i.e., {}-style parameters) is heavily optimized for several use cases and can operate garbage-free when configured correctly. You might want to reconsider formatter logger usage in latency-sensitive applications.

Event logger

EventLogger is a convenience for logging StructuredDataMessages, which format their content in a way compliant with the Syslog message format described in RFC 5424. The Event Logger is deprecated for removal! We advise users to switch to a plain Logger instead. Read more on event loggers…

Simple logger

Even though Log4j Core is the reference implementation of the Log4j API, the Log4j API itself also provides a very minimalist implementation: the Simple Logger.
This is a convenience for environments where a fully-fledged logging implementation is either missing or cannot be included for other reasons. The SimpleLogger is the fallback Log4j API implementation if no other is available on the classpath. Read more on the simple logger…

Status logger

The Status Logger is a standalone, self-sufficient Logger implementation used to record events that occur in the logging system (i.e., Log4j) itself. It is the logging system Log4j uses to report the status of its internals. Users can use the status logger either to emit logs from their custom Log4j components, or to troubleshoot a Log4j configuration. Read more on the status logger…

Fluent API

The fluent API allows you to log using a fluent interface:

LOGGER.atInfo()
    .withMarker(marker)
    .withLocation()
    .withThrowable(exception)
    .log("Login for user `{}` failed", userId);

Read more on the Fluent API…

Fish tagging

Just as a fish can be tagged and have its movement tracked (a.k.a. fish tagging [1]), stamping log events with a common tag or set of data elements allows the complete flow of a transaction or a request to be tracked. You can use these tags for several purposes, such as:

- providing extra information while serializing the log event
- allowing information to be filtered so that it does not overwhelm the system or the individuals who need to make use of it

Log4j provides fish tagging in several flavors:

Levels

Log levels are used to categorize log events by severity. Log4j contains predefined levels, of which the most common are DEBUG, INFO, WARN, and ERROR. Log4j also allows you to introduce your own custom levels.
Read more on custom levels…

Markers

Markers are programmatic labels developers can associate with log statements:

public class MyApp {

    private static final Logger LOGGER = LogManager.getLogger();

    private static final Marker ACCOUNT_MARKER = MarkerManager.getMarker("ACCOUNT");

    public void removeUser(String userId) {
        LOGGER.debug(ACCOUNT_MARKER, "Removing user with ID `{}`", userId);
        // ...
    }
}

Read more on markers…

Thread Context

Just like Java's ThreadLocal, the Thread Context facilitates associating information with the executing thread and making this information accessible to the rest of the logging system. The Thread Context offers two kinds of storage: map-structured storage, referred to as the Thread Context Map or Mapped Diagnostic Context (MDC), and stack-structured storage, referred to as the Thread Context Stack or Nested Diagnostic Context (NDC):

ThreadContext.put("ipAddress", request.getRemoteAddr());       (1)
ThreadContext.put("hostName", request.getServerName());        (1)
ThreadContext.put("loginId", session.getAttribute("loginId")); (1)

void performWork() {
    ThreadContext.push("performWork()"); (2)
    LOGGER.debug("Performing work");     (3)
    // Perform the work
    ThreadContext.pop();                 (4)
}

ThreadContext.clear(); (5)

1 Adding properties to the thread context map
2 Pushing a property onto the thread context stack
3 Added properties can later be used to, for instance, filter the log event, provide extra information in the layout, etc.
4 Popping the last pushed property from the thread context stack
5 Clearing the thread context (both stack and map!)

Read more on Thread Context…

Messages

Whereas almost every other logging API and implementation accepts only String-typed messages, Log4j generalizes this concept with a Message contract. Customizability of the message type enables users to have complete control over how a message is encoded by Log4j.
This liberal approach allows applications to choose the message type best fitting their logging needs; they can log plain Strings or custom PurchaseOrder objects. Log4j provides several predefined message types to cater for common use cases:

Simple String-typed messages:

LOGGER.info("foo");
LOGGER.info(new SimpleMessage("foo"));

String-typed parameterized messages:

LOGGER.info("foo {} {}", "bar", "baz");
LOGGER.info(new ParameterizedMessage("foo {} {}", new Object[] {"bar", "baz"}));

Map-typed messages:

LOGGER.info(new StringMapMessage().with("key1", "val1").with("key2", "val2"));

Read more on messages…

Flow tracing

The Logger class provides the traceEntry(), traceExit(), catching(), and throwing() methods, which are quite useful for following the execution path of applications. These methods generate log events that can be filtered separately from other debug logging. Read more on flow tracing…

1. Fish tagging was first described by Neil Harrison in the "Patterns for Logging Diagnostic Messages" chapter of "Pattern Languages of Program Design 3", edited by R. Martin, D. Riehle, and F. Buschmann, in 1997.

Copyright © 1999-2025 The Apache Software Foundation. Licensed under the Apache Software License, Version 2.0. Please read our privacy policy. Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html

Set up IAM permissions and roles for Lambda@Edge (Amazon CloudFront Developer Guide)

To configure Lambda@Edge, you must have the following IAM permissions and roles for AWS Lambda:

- IAM permissions – These permissions allow you to create your Lambda function and associate it with your CloudFront distribution.
- A Lambda function execution role (IAM role) – The Lambda service principals assume this role to execute your function.
- Service-linked roles for Lambda@Edge – The service-linked roles allow specific AWS services to replicate Lambda functions to AWS Regions and to enable CloudWatch to use CloudFront log files.

IAM permissions required to associate Lambda@Edge functions with CloudFront distributions

In addition to the IAM permissions that you need for Lambda, you need the following permissions to associate Lambda functions with CloudFront distributions:

- lambda:GetFunction – Grants permission to get configuration information for your Lambda function and a presigned URL to download a .zip file that contains the function.
- lambda:EnableReplication* – Grants permission to the resource policy so that the Lambda replication service can get the function code and configuration.
- lambda:DisableReplication* – Grants permission to the resource policy so that the Lambda replication service can delete the function.

Important: You must add the asterisk (*) at the end of the lambda:EnableReplication* and lambda:DisableReplication* actions.
For the resource, specify the ARN of the function version that you want to execute when a CloudFront event occurs, as in the following example:

arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2

- iam:CreateServiceLinkedRole – Grants permission to create a service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. After you configure Lambda@Edge for the first time, the service-linked role is created for you automatically. You don't need to add this permission to other distributions that use Lambda@Edge.
- cloudfront:UpdateDistribution or cloudfront:CreateDistribution – Grants permission to update or create a distribution.

For more information, see the following topics:

- Identity and Access Management for Amazon CloudFront
- Lambda resource access permissions in the AWS Lambda Developer Guide

Function execution role for service principals

You must create an IAM role that the lambda.amazonaws.com and edgelambda.amazonaws.com service principals can assume when they execute your function.

Tip: When you create your function in the Lambda console, you can choose to create a new execution role from an AWS policy template. This step automatically adds the required Lambda@Edge permissions to execute your function. See Step 5 in the Tutorial: Creating a simple Lambda@Edge function. For more information about creating an IAM role manually, see Creating roles and attaching policies (console) in the IAM User Guide.

Example: Role trust policy

You can add this policy under the Trust Relationship tab in the IAM console. Don't add it under the Permissions tab.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "edgelambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

For more information about the permissions that you need to grant to the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide.
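Putting the distribution-association permissions listed above together, an identity-based policy could look like the following sketch. This is an assumed example, not copied from the guide: the Region, account ID (123456789012), function name (TestFunction), and version (2) are placeholders, and you may want to scope the cloudfront actions to specific distribution ARNs instead of "*":

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:EnableReplication*",
        "lambda:DisableReplication*"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "cloudfront:UpdateDistribution",
        "cloudfront:CreateDistribution"
      ],
      "Resource": "*"
    }
  ]
}
```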
Notes

- By default, whenever a CloudFront event triggers a Lambda function, data is written to CloudWatch Logs. If you want to use these logs, the execution role needs permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole policy to grant that permission to the execution role. For more information about CloudWatch Logs, see Edge function logs.
- If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that action.

Service-linked roles for Lambda@Edge

Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all of the permissions that the service requires to call other AWS services on your behalf. Lambda@Edge uses the following IAM service-linked roles:

- AWSServiceRoleForLambdaReplicator – Lambda@Edge uses this role to replicate functions to AWS Regions. The role is created automatically when you first add a Lambda@Edge trigger in CloudFront, and it is required to use Lambda@Edge functions. The ARN for the AWSServiceRoleForLambdaReplicator role looks like the following example: arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator
- AWSServiceRoleForCloudFrontLogger – CloudFront uses this role to push log files into CloudWatch. You can use these log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association, to allow CloudFront to push Lambda@Edge error log files to CloudWatch.
The ARN for the AWSServiceRoleForCloudFrontLogger role looks like this:

arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger

A service-linked role makes setting up and using Lambda@Edge easier because you don't have to manually add the necessary permissions. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume the roles. The defined permissions include the trust policy and the permissions policy. You can't attach the permissions policy to any other IAM entity. You must remove any associated CloudFront or Lambda@Edge resources before you can delete a service-linked role. This helps protect your Lambda@Edge resources so that you don't remove a service-linked role that is still required to access active resources. For more information about service-linked roles, see Service-linked roles for CloudFront.

Service-linked role permissions for Lambda@Edge

Lambda@Edge uses two service-linked roles, named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each of these roles.

Service-linked role permissions for Lambda replicator

This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role.
The role permissions policy allows Lambda@Edge to complete the following actions on the specified resources:

- lambda:CreateFunction on arn:aws:lambda:*:*:function:*
- lambda:DeleteFunction on arn:aws:lambda:*:*:function:*
- lambda:DisableReplication on arn:aws:lambda:*:*:function:*
- iam:PassRole on all AWS resources
- cloudfront:ListDistributionsByLambdaFunction on all AWS resources

Service-linked role permissions for CloudFront logger

This service-linked role allows CloudFront to push log files into CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the specified arn:aws:logs:*:*:log-group:/aws/cloudfront/* resource:

- logs:CreateLogGroup
- logs:CreateLogStream
- logs:PutLogEvents

You must configure permissions to allow an IAM entity (such as a user, group, or role) to delete the Lambda@Edge service-linked roles. For more information, see Service-linked role permissions in the IAM User Guide.

Creating service-linked roles for Lambda@Edge

You don't typically create the service-linked roles for Lambda@Edge manually. The service creates the roles for you automatically in the following scenarios:

- When you first create a trigger, the service creates the AWSServiceRoleForLambdaReplicator role (if it doesn't already exist). This role allows Lambda to replicate Lambda@Edge functions to AWS Regions. If you delete the service-linked role, the role will be created again when you add a new trigger for Lambda@Edge in a distribution.
- When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role (if the role doesn't already exist). This role allows CloudFront to push your log files to CloudWatch.
If you delete the service-linked role, the role will be created again when you update or create a CloudFront distribution that has a Lambda@Edge association.

To create these service-linked roles manually, you can run the following AWS Command Line Interface (AWS CLI) commands.

To create the AWSServiceRoleForLambdaReplicator role, run:

aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com

To create the AWSServiceRoleForCloudFrontLogger role, run:

aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com

Editing Lambda@Edge service-linked roles

Lambda@Edge doesn't allow you to edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After the service has created a service-linked role, you can't change the name of the role because various entities might reference the role. However, you can use IAM to edit the role description. For more information, see Editing a service-linked role in the IAM User Guide.

Supported AWS Regions for Lambda@Edge service-linked roles

CloudFront supports using service-linked roles for Lambda@Edge in the following AWS Regions:

- US East (N. Virginia) – us-east-1
- US East (Ohio) – us-east-2
- US West (N. California) – us-west-1
- US West (Oregon) – us-west-2
- Asia Pacific (Mumbai) – ap-south-1
- Asia Pacific (Seoul) – ap-northeast-2
- Asia Pacific (Singapore) – ap-southeast-1
- Asia Pacific (Sydney) – ap-southeast-2
- Asia Pacific (Tokyo) – ap-northeast-1
- Europe (Frankfurt) – eu-central-1
- Europe (Ireland) – eu-west-1
- Europe (London) – eu-west-2
- South America (São Paulo) – sa-east-1
https://logging.apache.org/log4j/2.x/development.html

Development :: Apache Log4j (a subproject of Apache Logging Services)

This page shares information related to the development of Log4j. The content is aimed at users who want to contribute source code patches, and at maintainers. Do you need help setting up or configuring Log4j? Please refer to the Support page instead.

GitHub setup

Log4j uses GitHub extensively:

- Source code repository: https://github.com/apache/logging-log4j2/tree/rel/2.25.3
- Issue tracker: https://github.com/apache/logging-log4j2/issues
- Discussions: https://github.com/apache/logging-log4j2/discussions

Maintainer discussions mostly take place on mailing lists. Please refer to the Support page for the complete list of communication channels.
Branching scheme

The following branching scheme is followed:

- 2.x – the most recent Log4j 2 code
- main – the most recent Log4j 3 code
- <sourceBranch>-site-<environment> and <sourceBranch>-site-<environment>-out – branches used to serve the staging and production websites. The out-suffixed ones are automatically populated by CI; you are not supposed to touch them. See the Logging Parent website for details.
- release/<version> – branch triggering the CI logic to start the release process

How can I build the project? See the build instructions.

How can I run fuzz tests? See the fuzzing instructions.

I am not a committer. How shall I submit a patch? Is this a trivial fix, such as a code or documentation typo? Simply submit a pull request. A changelog entry is not needed; just make sure ./mvnw verify site succeeds. Is this a non-trivial fix or a new feature? Pitch it in a maintainer discussion channel and ask for assistance.

I am a committer. How shall I push my changes? As per the PMC resolution of 2025-04-10, all changes must be submitted in a pull request and undergo peer review. Make sure a changelog entry is attached and ./mvnw verify site succeeds. You are strongly advised to spar with another maintainer first (see the maintainer discussion channels) before starting to code.

I am a PMC member. How do I make a new release? All Maven-based Logging Services projects are parented by Logging Parent, which streamlines several project-wide processes, including making a new release. See its release instructions for projects.

I am a PMC member. How do I publish a new XML schema? All Maven-based Logging Services projects are parented by Logging Parent, which streamlines several project-wide processes, including publishing XML schemas. See its release instructions for XML schemas. Projects and XML schemas have different lifecycles! A new release of a project does not necessarily mean a new release of its XML schemas.
XML schemas might have been untouched, or they might contain minor changes while the project itself contains breaking changes, etc.
https://docs.aws.amazon.com/de_de/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-event-request-response.html

How Lambda@Edge works with requests and responses (Amazon CloudFront Developer Guide)

When you associate a CloudFront distribution with a Lambda@Edge function, CloudFront intercepts requests and responses at CloudFront edge locations. You can run Lambda functions when the following CloudFront events occur:

- When CloudFront receives a request from a viewer (viewer request)
- Before CloudFront forwards a request to the origin (origin request)
- When CloudFront receives a response from the origin (origin response)
- Before CloudFront returns the response to the viewer (viewer response)

If you use AWS WAF, the Lambda@Edge viewer-request function runs after all AWS WAF rules have been applied. For more information, see Working with requests and responses and the Lambda@Edge event structure.
Hat Ihnen diese Seite geholfen? – Nein Vielen Dank, dass Sie uns mitgeteilt haben, dass diese Seite überarbeitet werden muss. Es tut uns Leid, dass wir Ihnen nicht weiterhelfen konnten. Würden Sie sich einen Moment Zeit nehmen, um uns mitzuteilen, wie wir die Dokumentation verbessern können? | 2026-01-13T09:30:34 |
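The four trigger points described above can be illustrated with a minimal viewer-request handler (a Python sketch; the `x-request-seen` header and the pass-through logic are illustrative assumptions, not part of the AWS API):

```python
# A minimal sketch of a Lambda@Edge viewer-request handler (Python runtime).
def handler(event, context):
    # CloudFront delivers the request object under Records[0].cf.request.
    request = event["Records"][0]["cf"]["request"]
    # Headers are a dict keyed by lowercase name; each value is a list of
    # {"key": ..., "value": ...} pairs.
    request["headers"]["x-request-seen"] = [
        {"key": "X-Request-Seen", "value": "true"}
    ]
    # Returning the (possibly modified) request tells CloudFront to continue
    # processing it; returning a response object instead would short-circuit
    # the request and send that response back to the viewer.
    return request
```

The same handler signature applies to the other three events; only the object delivered under `Records[0].cf` differs (a response object for origin-response and viewer-response triggers).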
https://logging.apache.org/log4j/tools/index.html | Log4j Tools :: Apache Log4j Tools, a subproject of Apache Logging Services. Log4j Tools is tooling used internally by the Apache Log4j project infrastructure: Log4j Changelog, Log4j Changelog Maven Plugin, Log4j Docgen, Log4j Docgen Maven Plugin, and the Log4j Docgen AsciiDoctor extension. Maven Bill of Materials (BOM): To keep your Log4j Tools module versions aligned, a Maven Bill of Materials (BOM) POM is provided for your convenience. To use it with Maven, add the dependency listed below to your pom.xml file. Note the <dependencyManagement> nesting and the <scope>import</scope> instruction: this imports all modules bundled with the associated Log4j release into your dependencyManagement. As a result, you don't have to specify versions of the imported modules (log4j-changelog, log4j-docgen, etc.) while using them as a <dependency>. pom.xml snippet importing log4j-tools-bom:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-tools-bom</artifactId>
      <version>0.9.0</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement> | 2026-01-13T09:30:34
https://llvmweekly.org/issue/587 | LLVM Weekly - #587, March 31st 2025. If you prefer, you can read an HTML version of this email at https://llvmweekly.org/issue/587. Welcome to the five hundred and eighty-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. News and articles from around the web and events: "walnut356" blogged about an attempt to implement an LLDB TypeSystem for Rust (TypeSystemRust). JP Lehr announced that MetaCG development has now moved out into the open. MetaCG offers an annotated whole-program call-graph tool for Clang/LLVM. According to the LLVM calendar, in the coming week there will be: office hours with the following hosts: Quentin Colombet, Johannes Doerfert, Renato Golin; online sync-ups on the following topics: Flang, modules, LLVM/Offload, Clang C/C++ language working group, SPIR-V, OpenMP for Flang, HLSL, memory safety working group, MLGO. For more details see the LLVM calendar and the getting involved documentation on online sync-ups and office hours. On the forums: Peter Collingbourne published a set of pointer authentication or memory safety related RFCs: the structure protection family of UAF mitigation techniques, deactivation symbols, and emulated PAC. Vasileios Porpodas provided an update on the Sandbox Vectorizer project, noting that IR coverage is at 90% and the vectorizer is stable enough to compile and run the llvm-test-suite and clang as a workload. Mike Urbach raised the issue of overflow of orderIndex in MLIR for very large (100s of millions of ops!)
basic blocks. Tom Stellard has proposed an incident response guide aimed at LLVM project admins looking to handle a security incident. Yingwei Zheng shared news of clang-i18n, a plugin for a dynamically linked clang/LLVM to translate diagnostic messages into other languages. Florian Mayer proposed adding a 'review notes' bot which would flag things for reviewers to consider based on predefined rules. Andrzej Warzynski is working to document stacked PR practices. The issue of report_fatal_error and whether it should emit backtraces by default was raised again, still with strongly differing viewpoints. Yeoul Na is looking for feedback on a proposal to allow forward-referencing a struct member without bounds annotations. Balazs Benics gave a report on upgrading Z3 from 4.13.0 to 4.14.1, noting that runtime characteristics remain roughly the same. "dpthinker" asked why applying BOLT to libart.so on Android devices didn't seem to show much improvement and received good advice. ChuanqiXu updated their proposal for extensions to export macros/preprocessor states for C++20 modules. Alex Zinenko proposed allowing arbitrary vector element types in MLIR, following on from the previous RFC on allowing pointers as vector element types. Orlando Cazalet-Hyams is putting out a call for final feedback / a suggested path forward for the is_stmt placement for better debugging RFC. Rolf Morel made an MLIR RFC on generalising TilingInterface and the tileUsingSCF driver to operate on ShapedType. LLVM commits: A Mustache templating language parser was added to llvm/Support. ece59a8. Documentation was added for the sandbox vectorizer. 31fe0d2. A new late branch optimisation pass was added for RISC-V, cleaning up conditional branches that can be statically evaluated with an unconditional branch. d8e44a9.
Copying and pasting a failing command from a lit test is now ever so slightly easier, as "RUN: at line N:" was moved to a comment after the command, allowing the command to be directly copied without deleting the prefix. 8d3dc1e. SystemZ started to implement the isCopyInstrImpl hook. c0a7ccb. The llvm-mca -instruction-tables option now accepts verbosity levels. f4bb9b5. .option [no]exact is now supported for RISC-V assembler input. 6a371c7. The static data splitter pass gained some support for constant pool partitioning. 9747bb1. Clang commits: The _Countof operator from C2y is now implemented, and is provided as an extension in earlier C language modes. 00c43ae. Binary operators were implemented in ClangIR. 2f3c937. The alpha.core.FixedAddressDereference checker is no longer marked as alpha. 322b2fe. Clang's requirements for freestanding builds (-ffreestanding) were documented. 85c54a5. Other project commits: LLD's ELF linker now has a --why-live flag much like the Mach-O linker's, which prints the reasons symbols matching the given globs survived GC. 074af0f. Experimental support was added for compiling flang-rt directly for the GPU, similar to the method used for libc and libc++. 85974a0. In libclc, a number of helpers were moved to the CLC library. 70c325b, d46a699, 3013458, 3284559, and more. LLDB started to provide a statusline that displays information about the current state of the debugger at the bottom of the screen. 9c18edc. OpenMP was enabled for the Haiku OS. 9b7a7e4. Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://docs.aws.amazon.com/id_id/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-testing-debugging.html#lambda-edge-testing-debugging-test-function | Test and debug Lambda@Edge functions - Amazon CloudFront. Documentation, Amazon CloudFront Developer Guide. Topics: Test your Lambda@Edge functions; Identify Lambda@Edge function errors in CloudFront; Troubleshoot invalid Lambda@Edge function responses (validation errors); Troubleshoot Lambda@Edge function execution errors; Determine the Lambda@Edge Region; Determine whether your account pushes logs to CloudWatch. It's important to test your Lambda@Edge function code standalone, to make sure it completes the intended task, and to perform integration testing, to make sure that the function works correctly with CloudFront. During integration testing, or after your function has been deployed, you might need to debug CloudFront errors, such as HTTP 5xx errors. Errors can be an invalid response returned from the Lambda function, an execution error when the function is triggered, or an error caused by the Lambda service throttling executions. The sections in this topic share strategies for determining which type of failure is the problem, and then the steps you can take to fix the issue. Note: when you review CloudWatch log files or metrics while troubleshooting errors, be aware that they are displayed or stored in the AWS Region closest to the location where the function executed.
So if you have a website or web application with users in the United Kingdom, for example, and you have a Lambda function associated with your distribution, you must change the Region to view the CloudWatch metrics or log files for the London AWS Region. For more information, see Determine the Lambda@Edge Region. Test your Lambda@Edge functions: There are two steps to testing your Lambda function: standalone testing and integration testing. Test standalone functionality: Before you add your Lambda function to CloudFront, be sure to test its functionality first by using the testing capabilities in the Lambda console or another method. For more information about testing in the Lambda console, see Invoke a Lambda function using the console in the AWS Lambda Developer Guide. Test your function's operation in CloudFront: It's important to complete integration testing, where your function is associated with a distribution and runs based on CloudFront events. Make sure that the function is triggered for the right events, and returns a response that is valid and correct for CloudFront. For example, make sure that the event structure is correct, that only valid headers are included, and so on. As you iterate on integration tests with your function in the Lambda console, refer to the steps in the Lambda@Edge tutorial as you modify your code or change the CloudFront trigger that invokes your function. For example, make sure that you work in a numbered version of your function, as described in this tutorial step: Step 4: Add a CloudFront trigger to run the function.
When you make changes and deploy them, be aware that your updated function and CloudFront triggers take some time to replicate across all Regions. This typically takes a few minutes, but can take up to 15 minutes. You can check whether replication has finished by opening the CloudFront console and viewing your distribution. To check whether replication has finished: open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home; choose the distribution name; check for the distribution status to change from In Progress back to Deployed, which means that your function has been replicated. Then follow the steps in the next section to verify that the function works. Be aware that testing in the console only validates your function's logic, and does not apply any service quotas (formerly known as limits) that are specific to Lambda@Edge. Identify Lambda@Edge function errors in CloudFront: After you've verified that your function logic works correctly, you might still see HTTP 5xx errors when your function runs in CloudFront. HTTP 5xx errors can be returned for a variety of reasons, which can include Lambda function errors or other issues in CloudFront. If you use Lambda@Edge functions, you can use graphs in the CloudFront console to help track down the cause of errors, and then work to fix them. For example, you can see whether HTTP 5xx errors are caused by CloudFront or by a Lambda function, and then, for a specific function, view the related log files to investigate the issue. To troubleshoot HTTP errors in general in CloudFront, see the troubleshooting steps in the following topic:
Troubleshooting error response status codes in CloudFront. What causes Lambda@Edge function errors in CloudFront: There are several reasons why a Lambda function might cause an HTTP 5xx error, and the troubleshooting steps you should take depend on the type of error. Errors can be categorized as follows: Lambda function execution errors. An execution error occurs when CloudFront doesn't get a response from Lambda because there's an unhandled exception in the function or there's an error in the code, for example, if the code includes callback(Error). Invalid Lambda function responses returned to CloudFront. After the function runs, CloudFront receives a response from Lambda. An error is returned if the structure of the response object doesn't conform to the Lambda@Edge event structure, or if the response contains invalid headers or other invalid fields. Executions throttled in CloudFront because of Lambda service quotas (formerly known as limits). The Lambda service throttles executions in each Region, and returns an error if you exceed the quota. For more information, see Quotas on Lambda@Edge. How to determine the type of failure: To help you decide where to focus as you debug and work to resolve errors returned by CloudFront, it's helpful to identify why CloudFront returned an HTTP error. To get started, you can use the graphs provided in the Monitoring section of the CloudFront console in the AWS Management Console. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitor CloudFront metrics with Amazon CloudWatch. The following graphs are especially helpful when you want to track down whether an error was returned by the origin or by a Lambda function, and to narrow down the type of issue when it is an error from a Lambda function.
Error rate graph: One of the graphs that you can view on the Overview for each of your distributions is the Error rate graph. This graph shows error rates as a percentage of the total requests coming to your distribution. The graph shows the total error rate, total 4xx errors, total 5xx errors, and total 5xx errors from Lambda functions. Based on the type and volume of errors, you can take steps to investigate and troubleshoot the cause. If you see Lambda errors, you can investigate further by looking at the specific types of errors that the function returns. The Lambda@Edge errors tab includes graphs that categorize function errors by type, to help you pinpoint issues for a specific function. If you see CloudFront errors, you can troubleshoot and work to fix origin errors or change your CloudFront configuration. For more information, see Troubleshooting error response status codes in CloudFront. Execution error and invalid function response graphs: The Lambda@Edge errors tab includes graphs that categorize the Lambda@Edge errors for a specific distribution by type. For example, one graph shows all execution errors by AWS Region. To make troubleshooting easier, you can look for specific issues by opening and examining the log files for a specific function by Region. To view log files for a specific function by Region: On the Lambda@Edge errors tab, under Associated Lambda@Edge functions, choose the name of a function, and then choose View metrics. Then, on the page with your function's name, in the top-right corner, choose View function logs, and then choose a Region. For example, if you see issues in the Errors graph for the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console.
In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events for the function. In addition, read the following sections in this chapter for more recommendations about troubleshooting and fixing errors. Throttles graph: The Lambda@Edge errors tab also includes a Throttles graph. On occasion, the Lambda service throttles invocations of your function on a per-Region basis, if you reach the Regional concurrency quota (formerly called a limit). If you see a quota-exceeded error, your function has reached the quota that the Lambda service imposes on executions in a Region. For more information, including how to request a quota increase, see Quotas on Lambda@Edge. For an example of how to use this information to troubleshoot HTTP errors, see Four Steps for Debugging your Content Delivery on AWS. Troubleshoot invalid Lambda@Edge function responses (validation errors): If you identify that your problem is a Lambda validation error, it means that your Lambda function is returning an invalid response to CloudFront. Follow the guidance in this section to review your function and make sure that your response conforms to CloudFront requirements. CloudFront validates the response from a Lambda function in two ways: The Lambda response must conform to the required object structure. Examples of bad object structure include the following: unparseable JSON, missing required fields, and an invalid object in the response. For more information, see the Lambda@Edge event structure. The response must include only valid object values. An error occurs if the response includes a valid object but has values that aren't supported.
Examples include the following: adding or updating headers that are on the disallowed or read-only lists (see Restrictions on edge functions), exceeding the maximum allowed size (see Restrictions on the size of the generated response in the Lambda@Edge errors topic), and invalid characters or values (see the Lambda@Edge event structure). When Lambda returns an invalid response to CloudFront, error messages are written to log files that CloudFront pushes to CloudWatch in the Region where the Lambda function executed. Sending log files to CloudWatch when there's an invalid response is the default behavior. However, if you associated a Lambda function with CloudFront before that functionality was released, it might not be enabled for your function. For more information, see Determine whether your account pushes logs to CloudWatch, later in this topic. CloudFront pushes log files to the Region corresponding to where your function executed, in a log group associated with your distribution. Log groups have the following format: /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where you can find the CloudWatch log files, see Determine the Lambda@Edge Region, later in this topic. If the error is reproducible, you can create a new request that results in the error and then find the request id in the failed CloudFront response (the X-Amz-Cf-Id header) to locate a single failure in the log files. The log file entry includes information that can help you identify why the error was returned, and also lists the corresponding Lambda request id so that you can analyze the root cause in the context of a single request. If an error is intermittent, you can use CloudFront access logs to find the request id for a failed request, and then search the CloudWatch logs for the corresponding error messages.
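The log-group naming convention and the X-Amz-Cf-Id correlation described above can be sketched as two small helpers (a Python sketch; the function names are hypothetical, while the /aws/cloudfront/LambdaEdge/DistributionId format and the header name come from the documentation):

```python
from typing import Optional

def lambda_edge_log_group(distribution_id: str) -> str:
    # Invalid-response logs are pushed to a log group of this form,
    # in the Region where the function replica actually executed.
    return f"/aws/cloudfront/LambdaEdge/{distribution_id}"

def request_id_from_response(headers: dict) -> Optional[str]:
    # The request id used to locate a single failure in the log files is
    # carried in the X-Amz-Cf-Id response header; compare case-insensitively
    # since HTTP header casing can vary.
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get("x-amz-cf-id")
```

With the log group name and the request id in hand, you can search the CloudWatch logs in the relevant Region for the matching entry.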
For more information, see the previous section, How to determine the type of failure. Troubleshoot Lambda@Edge function execution errors: If the problem is a Lambda execution error, it can be helpful to add logging statements to your Lambda function, to write messages to the CloudWatch log files that monitor the execution of your function in CloudFront and determine whether it's working as expected. You can then search for those statements in the CloudWatch log files to verify that your function is working. Note: Even if you haven't changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could return an execution error. For information about testing and migrating to a later version, see Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environments. Determine the Lambda@Edge Region: To see the Regions where your Lambda@Edge function is receiving traffic, view the metrics for the function in the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region. On the same page, you can choose a Region and view the log files for that Region so that you can investigate issues. You must review the CloudWatch log files in the correct AWS Region to see the log files created when CloudFront executed your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitor CloudFront metrics with Amazon CloudWatch. Determine whether your account pushes logs to CloudWatch: By default, CloudFront enables logging of invalid Lambda function responses, and pushes the log files to CloudWatch by using one of the
service-linked roles for Lambda@Edge. If you have a Lambda@Edge function that you added to CloudFront before the invalid Lambda function response logging feature was released, logging is enabled when you update your Lambda@Edge configuration, for example, by adding a CloudFront trigger. You can verify that pushing log files to CloudWatch is enabled for your account by doing the following: Check whether logs appear in CloudWatch. Make sure that you look in the Region where the Lambda@Edge function executed. For more information, see Determine the Lambda@Edge Region. Determine whether the related service-linked role exists in your account in IAM. You must have the AWSServiceRoleForCloudFrontLogger IAM role in your account. For more information about this role, see Service-linked roles for Lambda@Edge. | 2026-01-13T09:30:34
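The logging-statement technique recommended above for debugging execution errors might look like the following (a Python sketch; the message format and the log-and-re-raise pattern are illustrative assumptions, not an AWS-prescribed structure):

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # Anything logged here lands in the CloudWatch log group for the
    # Region where this replica executed, which is where the search
    # strategies described above look for it.
    request = event["Records"][0]["cf"]["request"]
    logger.info("viewer request: method=%s uri=%s",
                request.get("method"), request.get("uri"))
    try:
        # ... function logic goes here ...
        return request
    except Exception:
        # Log the full event before re-raising, so the execution error
        # can be correlated with the originating CloudFront request.
        logger.exception("unhandled error; event=%s", json.dumps(event))
        raise
```

Searching the log streams for these messages confirms whether the function ran, which event it received, and where an unhandled exception occurred.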
https://logging.apache.org/log4j/transform/index.html | Log4j Transform :: Apache Log4j Transform, a subproject of Apache Logging Services. Log4j Transform contains tools for binary postprocessing of projects that use the Apache Log4j2 API. Maven Bill of Materials (BOM): To keep your Log4j Transform module versions aligned, a Maven Bill of Materials (BOM) POM is provided for your convenience. To use it with Maven, add the dependency listed below to your pom.xml file. Note the <dependencyManagement> nesting and the <scope>import</scope> instruction: this imports all modules bundled with the associated Log4j release into your dependencyManagement. As a result, you don't have to specify versions of the imported modules (log4j-weaver, etc.) while using them as a <dependency>. pom.xml snippet importing log4j-transform-bom:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-transform-bom</artifactId>
      <version>0.2.0</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement> | 2026-01-13T09:30:34
http://docs.buildbot.net/current/manual/configuration/changesources.html | 2.5.3. Change Sources and Changes — Buildbot 4.3.0 documentation. 2.5.3.
Change Sources and Changes. A change source is the mechanism Buildbot uses to get information about new changes in a repository maintained by a version control system. These change sources fall broadly into two categories: pollers, which periodically check the repository for updates, and hooks, where the repository is configured to notify Buildbot whenever an update occurs. A Change is the abstract way Buildbot represents changes in any of the version control systems it supports. It contains just enough information to acquire a specific version of the tree when needed; this usually happens as one of the first steps in a Build. This concept does not map perfectly to every version control system. For example, for CVS, Buildbot must guess that version updates made to multiple files within a short time represent a single change. Changes can be provided by a variety of ChangeSource types, although any given project will typically have only a single ChangeSource active. 2.5.3.1. How Different VC Systems Specify Sources: For CVS, the static specifications are repository and module. In addition to those, each build uses a timestamp (or omits the timestamp to mean the latest) and a branch tag (which defaults to HEAD). These parameters collectively specify a set of sources from which a build may be performed.
Subversion combines the repository, module, and branch into a single Subversion URL parameter. Within that scope, source checkouts can be specified by a numeric revision number (a repository-wide monotonically-increasing marker, such that each transaction that changes the repository is indexed by a different revision number), or a revision timestamp. When branches are used, the repository and module form a static baseURL , while each build has a revision number and a branch (which defaults to a statically-specified defaultBranch ). The baseURL and branch are simply concatenated together to derive the repourl to use for the checkout. Perforce is similar. The server is specified through a P4PORT parameter. Module and branch are specified in a single depot path, and revisions are depot-wide. When branches are used, the p4base and defaultBranch are concatenated together to produce the depot path. Bzr (which is a descendant of Arch/Bazaar, and is frequently referred to as “Bazaar”) has the same sort of repository-vs-workspace model as Arch, but the repository data can either be stored inside the working directory or kept elsewhere (either on the same machine or on an entirely different machine). For the purposes of Buildbot (which never commits changes), the repository is specified with a URL and a revision number. The most common way to obtain read-only access to a bzr tree is via HTTP, simply by making the repository visible through a web server like Apache. Bzr can also use FTP and SFTP servers, if the worker process has sufficient privileges to access them. Higher performance can be obtained by running a special Bazaar-specific server. None of these matter to the buildbot: the repository URL just has to match the kind of server being used. The repoURL argument provides the location of the repository. 
Branches are expressed as subdirectories of the main central repository, which means that if branches are being used, the BZR step is given a baseURL and defaultBranch instead of getting the repoURL argument. Darcs doesn’t really have the notion of a single master repository. Nor does it really have branches. In Darcs, each working directory is also a repository, and there are operations to push and pull patches from one of these repositories to another. For the Buildbot’s purposes, all you need to do is specify the URL of a repository that you want to build from. The worker will then pull the latest patches from that repository and build them. Multiple branches are implemented by using multiple repositories (possibly living on the same server). Builders which use Darcs therefore have a static repourl which specifies the location of the repository. If branches are being used, the source Step is instead configured with a baseURL and a defaultBranch , and the two strings are simply concatenated together to obtain the repository’s URL. Each build then has a specific branch which replaces defaultBranch , or just uses the default one. Instead of a revision number, each build can have a context , which is a string that records all the patches that are present in a given tree (this is the output of darcs changes --context , and is considerably less concise than, e.g. Subversion’s revision number, but the patch-reordering flexibility of Darcs makes it impossible to provide a shorter useful specification). Mercurial follows a decentralized model, and each repository can have several branches and tags. The source Step is configured with a static repourl which specifies the location of the repository. Branches are configured with the defaultBranch argument. The revision is the hash identifier returned by hg identify . Git also follows a decentralized model, and each repository can have several branches and tags. 
The source Step is configured with a static repourl which specifies the location of the repository. In addition, an optional branch parameter can be specified to check out code from a specific branch instead of the default master branch. The revision is specified as a SHA1 hash, as returned by e.g. git rev-parse. No attempt is made to ensure that the specified revision is actually a subset of the specified branch. Monotone is another VC system that follows a decentralized model, where each repository can have several branches and tags. The source Step is configured with static repourl and branch parameters, which specify the location of the repository and the branch to use. The revision is specified as a SHA1 hash, as returned by e.g. mtn automate select w:. No attempt is made to ensure that the specified revision is actually a subset of the specified branch.

Comparison

    Name        Change      Revision    Branches
    CVS         patch [1]   timestamp   unnamed
    Subversion  revision    integer     directories
    Git         commit      sha1 hash   named refs
    Mercurial   changeset   sha1 hash   different repos or (permanently) named commits
    Darcs       ?           none [2]    different repos
    Bazaar      ?           ?           ?
    Perforce    ?           ?           ?
    BitKeeper   changeset   ?           different repos

[1] Note that CVS only tracks patches to individual files. Buildbot tries to recognize coordinated changes to multiple files by correlating change times.
[2] Darcs does not have a concise way of representing a particular revision of the source.

Tree Stability

Changes tend to arrive at a buildmaster in bursts. In many cases, these bursts of changes are meant to be taken together. For example, a developer may have pushed multiple commits to a DVCS that comprise the same new feature or bugfix. To avoid trying to build every change, Buildbot supports the notion of tree stability, waiting for a burst of changes to finish before starting to schedule builds. This is implemented as a timer: builds are not scheduled until no changes have occurred for the duration of the timer.

2.5.3.2.
Choosing a Change Source

There are a variety of ChangeSource classes available, some of which are meant to be used in conjunction with other tools to deliver Change events from the VC repository to the buildmaster. As a quick guide, here is a list of VC systems and the ChangeSources that might be useful with them. Note that some of these modules are in Buildbot's master/contrib directory, meaning that they have been offered by other users in hopes they may be useful, and might require some additional work to make them functional.

CVS
- CVSMaildirSource (watching mail sent by the master/contrib/buildbot_cvs_mail.py script)
- PBChangeSource (listening for connections from buildbot sendchange run in a loginfo script)
- PBChangeSource (listening for connections from a long-running master/contrib/viewcvspoll.py polling process which examines the ViewCVS database directly)
- Change Hooks in WebStatus

SVN
- PBChangeSource (listening for connections from master/contrib/svn_buildbot.py run in a postcommit script)
- PBChangeSource (listening for connections from a long-running master/contrib/svn_watcher.py or master/contrib/svnpoller.py polling process)
- SVNCommitEmailMaildirSource (watching for email sent by commit-email.pl)
- SVNPoller (polling the SVN repository)
- Change Hooks in WebStatus

Darcs
- PBChangeSource (listening for connections from master/contrib/darcs_buildbot.py in a commit script)
- Change Hooks in WebStatus

Mercurial
- Change Hooks in WebStatus (including master/contrib/hgbuildbot.py, configurable in a changegroup hook)
- BitBucket change hook (specifically designed for BitBucket notifications, but requiring a publicly-accessible WebStatus)
- HgPoller (polling a remote Mercurial repository)
- BitbucketPullrequestPoller (polling Bitbucket for pull requests)
- Mail-parsing ChangeSources, though there are no ready-to-use recipes

Bzr (the newer Bazaar)
- PBChangeSource (listening for connections from master/contrib/bzr_buildbot.py run in a post-change-branch-tip or commit hook)
- BzrPoller
(polling the Bzr repository)
- Change Hooks in WebStatus

Git
- PBChangeSource (listening for connections from master/contrib/git_buildbot.py run in the post-receive hook)
- PBChangeSource (listening for connections from master/contrib/github_buildbot.py, which listens for notifications from GitHub)
- Change Hooks in WebStatus
- GitHub change hook (specifically designed for GitHub notifications, but requiring a publicly-accessible WebStatus)
- BitBucket change hook (specifically designed for BitBucket notifications, but requiring a publicly-accessible WebStatus)
- GitPoller (polling a remote Git repository)
- GitHubPullrequestPoller (polling the GitHub API for pull requests)
- BitbucketPullrequestPoller (polling Bitbucket for pull requests)

Repo/Gerrit
- GerritChangeSource connects to Gerrit via SSH (and optionally HTTP) to get a live stream of changes
- GerritEventLogPoller connects to Gerrit via HTTP with the help of the events-log plugin

Monotone
- PBChangeSource (listening for connections from monotone-buildbot.lua, which is available with Monotone)

All VC systems can be driven by a PBChangeSource and the buildbot sendchange tool run from some form of commit script. If you write an email parsing function, they can also all be driven by a suitable mail-parsing source. Additionally, handlers for web-based notification (e.g. from GitHub) can be used with WebStatus' change_hook module. The interface is simple, so adding your own handlers (and sharing!) should be a breeze. See the Change Source Index for a full list of change sources.

2.5.3.3. Configuring Change Sources

The change_source configuration key holds all active change sources for the configuration. Most configurations have a single ChangeSource, watching only a single tree, e.g.:

    from buildbot.plugins import changes

    c['change_source'] = changes.PBChangeSource()

For more advanced configurations, the parameter can be a list of change sources:

    source1 = ...
    source2 = ...
    c['change_source'] = [source1, source2]

Repository and Project

ChangeSources will, in general, automatically provide the proper repository attribute for any changes they produce. For systems which operate on URL-like specifiers, this is a repository URL. Other ChangeSources adapt the concept as necessary. Many ChangeSources allow you to specify a project as well. This attribute is useful when building from several distinct codebases in the same buildmaster: the project string can serve to differentiate the codebases. Schedulers can filter on project, so you can configure different builders to run for each project.

2.5.3.4. Mail-parsing ChangeSources

Many projects publish information about changes to their source tree by sending an email message out to a mailing list, frequently named PROJECT-commits or PROJECT-changes. Each message usually contains a description of the change (who made the change, which files were affected) and sometimes a copy of the diff. Humans can subscribe to this list to stay informed about what's happening to the source tree. Buildbot can also subscribe to a -commits mailing list, and can trigger builds in response to Changes that it hears about. The buildmaster admin needs to arrange for these email messages to arrive in a place where the buildmaster can find them, and to configure the buildmaster to parse the messages correctly. Once that is in place, the email parser will create Change objects and deliver them to the schedulers (see Schedulers) just like any other ChangeSource. There are two components to setting up an email-based ChangeSource. The first is to route the email messages to the buildmaster, which is done by dropping them into a maildir. The second is to actually parse the messages, which is highly dependent upon the tool that was used to create them. Each VC system has a collection of favorite change-emailing tools, each with a slightly different format and its own parsing function.
Buildbot has a separate ChangeSource variant for each of these parsing functions. Once you've chosen a maildir location and a parsing function, create the change source and put it in change_source:

    from buildbot.plugins import changes

    c['change_source'] = changes.CVSMaildirSource("~/maildir-buildbot",
                                                  prefix="/trunk/")

Subscribing the Buildmaster

The recommended way to install Buildbot is to create a dedicated account for the buildmaster. If you do this, the account will probably have a distinct email address (perhaps buildmaster@example.org). Then just arrange for this account's email to be delivered to a suitable maildir (described in the next section). If Buildbot does not have its own account, extension addresses can be used to distinguish between emails intended for the buildmaster and emails intended for the rest of the account. In most modern MTAs, the foo@example.org account has control over every email address at example.org which begins with "foo", such that emails addressed to account-foo@example.org can be delivered to a different destination than account-bar@example.org. qmail does this by using separate .qmail files for the two destinations (.qmail-foo and .qmail-bar, with .qmail controlling the base address and .qmail-default controlling all other extensions). Other MTAs have similar mechanisms. Thus you can assign an extension address like foo-buildmaster@example.org to the buildmaster and retain foo@example.org for your own use.

Using Maildirs

A maildir is a simple directory structure originally developed for qmail that allows safe atomic updates without locking. Create a base directory with three subdirectories: new, tmp, and cur. When messages arrive, they are put into a uniquely-named file (using pids, timestamps, and random numbers) in tmp. When the file is complete, it is atomically renamed into new. Eventually the buildmaster notices the file in new, reads and parses the contents, then moves it into cur.
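The maildir layout and the tmp-then-rename delivery step can be illustrated with a short, self-contained Python sketch (this is not Buildbot code, just the qmail-style convention; directory and function names are illustrative):

```python
import os
import random
import time

def make_maildir(base):
    # Equivalent to: mkdir -p BASE/{cur,new,tmp}
    for sub in ("cur", "new", "tmp"):
        os.makedirs(os.path.join(base, sub), exist_ok=True)

def deliver(base, message_bytes):
    # Write under tmp/ with a unique name, then atomically rename into
    # new/ so a reader never sees a partially-written message.
    name = "%d.%d.%d" % (time.time(), os.getpid(), random.randrange(10**6))
    tmp_path = os.path.join(base, "tmp", name)
    with open(tmp_path, "wb") as f:
        f.write(message_bytes)
    new_path = os.path.join(base, "new", name)
    os.rename(tmp_path, new_path)
    return new_path

make_maildir("maildir-demo")
path = deliver("maildir-demo", b"Subject: commit notice\n\nfiles changed")
print(os.path.basename(os.path.dirname(path)))  # new
```

A consumer such as the buildmaster would then scan new/, parse each file, and move it to cur/.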
A cronjob can be used to delete files in cur at leisure. Maildirs are frequently created with the maildirmake tool, but a simple mkdir -p ~/MAILDIR/{cur,new,tmp} is pretty much equivalent. Many modern MTAs can deliver directly to maildirs. The usual .forward or .procmailrc syntax is to name the base directory with a trailing slash, so something like ~/MAILDIR/. qmail and postfix are maildir-capable MTAs, and procmail is a maildir-capable MDA (Mail Delivery Agent). Here is an example procmail config, located in ~/.procmailrc:

    # .procmailrc
    # routes incoming mail to appropriate mailboxes
    PATH=/usr/bin:/usr/local/bin
    MAILDIR=$HOME/Mail
    LOGFILE=.procmail_log
    SHELL=/bin/sh

    :0
    *
    new

If procmail is not set up on a system-wide basis, then the following one-line .forward file will invoke it:

    !/usr/bin/procmail

For MTAs which cannot put files into maildirs directly, the safecat tool can be executed from a .forward file to accomplish the same thing. The buildmaster uses the Linux DNotify facility to receive immediate notification when the maildir's new directory has changed. When this facility is not available, it polls the directory for new messages, every 10 seconds by default.

Parsing Email Change Messages

The second component to setting up an email-based ChangeSource is to parse the actual notices. This is highly dependent upon the VC system and commit script in use. A couple of common tools used to create these change emails, along with the Buildbot tools to parse them, are:

CVS
    Buildbot CVS MailNotifier: CVSMaildirSource
SVN
    svnmailer (http://opensource.perlig.de/en/svnmailer/)
    commit-email.pl: SVNCommitEmailMaildirSource
Bzr
    Launchpad: BzrLaunchpadEmailMaildirSource
Mercurial
    NotifyExtension (https://www.mercurial-scm.org/wiki/NotifyExtension)
Git
    post-receive-email (http://git.kernel.org/?p=git/git.git;a=blob;f=contrib/hooks/post-receive-email;hb=HEAD)

The following sections describe the parsers available for each of these tools.
Most of these parsers accept a prefix= argument, which is used to limit the set of files that the buildmaster pays attention to. This is most useful for systems like CVS and SVN which put multiple projects in a single repository (or use repository names to indicate branches). Each filename that appears in the email is tested against the prefix: if the filename does not start with the prefix, the file is ignored. If the filename does start with the prefix, that prefix is stripped from the filename before any further processing is done. Thus the prefix usually ends with a slash.

CVSMaildirSource

class buildbot.changes.mail.CVSMaildirSource

This parser works with the master/contrib/buildbot_cvs_mail.py script. The script sends an email containing all the files submitted in one directory. It is invoked by using the CVSROOT/loginfo facility. Buildbot's CVSMaildirSource knows how to parse these messages and turn them into Change objects. It takes the directory name of the maildir root. For example:

    from buildbot.plugins import changes

    c['change_source'] = changes.CVSMaildirSource("/home/buildbot/Mail")

Configuration of CVS and buildbot_cvs_mail.py

CVS must be configured to invoke the buildbot_cvs_mail.py script when files are checked in. This is done via the CVS loginfo configuration file. To update this, first do:

    cvs checkout CVSROOT

cd to the CVSROOT directory and edit the file loginfo, adding a line like:

    SomeModule /cvsroot/CVSROOT/buildbot_cvs_mail.py --cvsroot :ext:example.com:/cvsroot -e buildbot -P SomeModule %{sVv}

Note: for CVS version 1.12.x, the --path %p option is required; versions 1.11.x and 1.12.x report the directory path differently. The above example puts the buildbot_cvs_mail.py script under /cvsroot/CVSROOT, but it can be anywhere. Run the script with --help to see all the options. At the very least, the options -e (email) and -P (project) should be specified. The line must end with %{sVv}.
This is expanded to the files that were modified. Additional entries can be added to support more modules. See buildbot_cvs_mail.py --help for more information on the available options.

SVNCommitEmailMaildirSource

class buildbot.changes.mail.SVNCommitEmailMaildirSource

SVNCommitEmailMaildirSource parses messages sent out by the commit-email.pl script, which is included in the Subversion distribution. It does not currently handle branches: all of the Change objects that it creates will be associated with the default (i.e. trunk) branch.

    from buildbot.plugins import changes

    c['change_source'] = changes.SVNCommitEmailMaildirSource("~/maildir-buildbot")

BzrLaunchpadEmailMaildirSource

class buildbot.changes.mail.BzrLaunchpadEmailMaildirSource

BzrLaunchpadEmailMaildirSource parses the mails that are sent to addresses that subscribe to branch revision notifications for a bzr branch hosted on Launchpad. The branch name defaults to lp:<Launchpad path>, for example lp:~maria-captains/maria/5.1. If only a single branch is used, the default branch name can be changed by setting defaultBranch. For multiple branches, pass a dictionary as the value of the branchMap option to map specific repository paths to specific branch names (see the example below). The leading lp: prefix of the path is optional. The prefix option is not supported (it is silently ignored). Use branchMap and defaultBranch instead to assign changes to branches (and just do not subscribe the Buildbot to branches that are not of interest). The revision number is obtained from the email text. The bzr revision id is not available in the mails sent by Launchpad. However, it is possible to set the bzr append_revisions_only option for public shared repositories to avoid new pushes of merges changing the meaning of old revision numbers.

    from buildbot.plugins import changes

    bm = {'lp:~maria-captains/maria/5.1': '5.1',
          'lp:~maria-captains/maria/6.0': '6.0'}
    c['change_source'] = changes.BzrLaunchpadEmailMaildirSource("~/maildir-buildbot",
                                                                branchMap=bm)

2.5.3.5. PBChangeSource

class buildbot.changes.pb.PBChangeSource

PBChangeSource actually listens on a TCP port for clients to connect and push change notices into the buildmaster. This is used by the built-in buildbot sendchange notification tool, as well as several version-control hook scripts. It is also useful for creating new kinds of change sources that work on a push model instead of some kind of subscription scheme, for example a script which is run out of an email .forward file. This ChangeSource always runs on the same TCP port as the workers. It shares the same protocol, and in fact shares the same space of "usernames", so you cannot configure a PBChangeSource with the same name as a worker. If you have a publicly accessible worker port and are using PBChangeSource, you must establish a secure username and password for the change source. If your sendchange credentials are known (e.g., the defaults), then your buildmaster is susceptible to injection of arbitrary changes, which (depending on the build factories) could lead to arbitrary code execution on workers. PBChangeSource is created with the following arguments:

port
    Which port to listen on. If None (which is the default), it shares the port used for worker connections.
user
    The user account that the client program must use to connect. Defaults to change.
passwd
    The password for the connection; defaults to changepw. Can be a Secret. Do not use this default on a publicly exposed port!
prefix
    The prefix to be found and stripped from filenames delivered over the connection, defaulting to None. Any filenames which do not start with this prefix will be removed. If all the filenames in a given Change are removed, then that whole Change will be dropped. This string should probably end with a directory separator.
This is useful for changes coming from version control systems that represent branches as parent directories within the repository (like SVN and Perforce). Use a prefix of trunk/ or project/branches/foobranch/ to follow only one branch and to get correct tree-relative filenames. Without a prefix, the PBChangeSource will probably deliver Changes with filenames like trunk/foo.c instead of just foo.c. Of course this also depends upon the tool sending the Changes in (like buildbot sendchange) and what filenames it is delivering: that tool may be filtering and stripping prefixes at the sending end. For example:

    from buildbot.plugins import changes

    c['change_source'] = changes.PBChangeSource(port=9999,
                                                user='laura',
                                                passwd='fpga')

The following hooks are useful for sending changes to a PBChangeSource:

Bzr Hook

Bzr is also written in Python, and the Bzr hook depends on Twisted to send the changes. To install, put master/contrib/bzr_buildbot.py in one of your plugins locations, i.e. a bzr plugins directory (e.g., ~/.bazaar/plugins). Then, in one of your bazaar conf files (e.g., ~/.bazaar/locations.conf), set the location you want to connect with Buildbot with these keys:

buildbot_on
    One of 'commit', 'push', or 'change'. Turns the plugin on to report changes via commit, changes via push, or any changes to the trunk. 'change' is recommended.
buildbot_server
    (required to send to a Buildbot master) The URL of the Buildbot master to which you will connect (as of this writing, the same server and port to which workers connect).
buildbot_port
    (optional, defaults to 9989) The port of the Buildbot master to which you will connect (as of this writing, the same server and port to which workers connect).
buildbot_pqm
    (optional, defaults to not pqm) Normally, the user that commits the revision is the user that is responsible for the change.
When run in a pqm (Patch Queue Manager, see https://launchpad.net/pqm) environment, the user that commits is the Patch Queue Manager, and the user that committed the parent revision is responsible for the change. To turn on pqm mode, set this value to any of (case-insensitive) "Yes", "Y", "True", or "T".

buildbot_dry_run
    (optional, defaults to not a dry run) Normally, the post-commit hook will attempt to communicate with the configured Buildbot server and port. If this parameter is included and set to any of (case-insensitive) "Yes", "Y", "True", or "T", then the hook will simply print what it would have sent, but not attempt to contact the Buildbot master.
buildbot_send_branch_name
    (optional, defaults to not sending the branch name) If your Buildbot's bzr source build step uses a repourl, do not turn this on. If your Buildbot's bzr build step uses a baseURL, then you may set this value to any of (case-insensitive) "Yes", "Y", "True", or "T" to have the Buildbot master append the branch name to the baseURL.

Note: the bzr smart server (as of version 2.2.2) doesn't know how to resolve bzr:// URLs into absolute paths, so any paths in locations.conf won't match, and hence no change notifications will be sent to Buildbot. Setting configuration parameters globally or in-branch might still work. When Buildbot no longer has a hardcoded password, it will be a configuration option here as well. Here's a simple example that you might have in your ~/.bazaar/locations.conf:

    [chroot-*:///var/local/myrepo/mybranch]
    buildbot_on = change
    buildbot_server = localhost

2.5.3.6. P4Source

The P4Source periodically polls a Perforce depot for changes. It accepts the following arguments:

p4port
    The Perforce server to connect to (as host:port).
p4user
    The Perforce user.
p4passwd
    The Perforce password.
p4base
    The base depot path to watch, without the trailing '/...'.
p4bin
    An optional string parameter. Specify the location of the perforce command line binary (p4).
You only need to do this if the perforce binary is not in the path of the Buildbot user. Defaults to p4.

split_file
    A function that maps a pathname, without the leading p4base, to a (branch, filename) tuple. The default just returns (None, branchfile), which effectively disables branch support. You should supply a function which understands your repository structure.
pollInterval
    How often to poll, in seconds. Defaults to 600 (10 minutes).
pollRandomDelayMin
    Minimum delay in seconds to wait before each poll; default is 0. This is useful in case you have a lot of pollers and you want to spread the polling load over a period of time. Setting it equal to the maximum delay will effectively delay all polls by a fixed amount of time. Must be less than or equal to the maximum delay.
pollRandomDelayMax
    Maximum delay in seconds to wait before each poll; default is 0. This is useful in case you have a lot of pollers and you want to spread the polling load over a period of time. Must be less than the poll interval.
project
    Set the name of the project to be used for the P4Source. This will then be set in any changes generated by the P4Source, and can be used in a Change Filter for triggering particular builders.
pollAtLaunch
    Determines when the first poll occurs. True = immediately on launch; False = wait for one pollInterval (default).
histmax
    The maximum number of changes to inspect at a time. If more than this number occur since the last poll, older changes will be silently ignored.
encoding
    The character encoding of p4's output. This defaults to "utf8", but if your commit messages are in another encoding, specify that here. For example, if you're using Perforce on Windows, you may need to use "cp437" as the encoding if "utf8" generates errors in your master log.
server_tz
    The timezone of the Perforce server, using the usual timezone format (e.g. "Europe/Stockholm"), in case it's not in UTC.
use_tickets
    Set to True to use ticket-based authentication instead of passwords (but you still need to specify p4passwd).
ticket_login_interval
    How often to get a new ticket, in seconds, when use_tickets is enabled. Defaults to 86400 (24 hours).
revlink
    A function that maps a branch and revision to a valid URL (e.g. p4web), stored along with the change. This function must be a callable which takes two arguments, the branch and the revision. Defaults to lambda branch, revision: (u'').
resolvewho
    A function that resolves the Perforce 'user@workspace' into a more verbose form, stored as the author of the change. Useful when usernames do not match email addresses and an external, client-side lookup is required. This function must be a callable which takes one argument. Defaults to lambda who: (who).

Example #1

This configuration uses the P4PORT, P4USER, and P4PASSWD specified in the buildmaster's environment. It watches a project in which the branch name is simply the next path component, and the file is all path components after.

    from buildbot.plugins import changes

    s = changes.P4Source(p4base='//depot/project/',
                         split_file=lambda branchfile: branchfile.split('/', 1))
    c['change_source'] = s

Example #2

Similar to the previous example, but it also resolves the branch and revision into a valid revlink.

    from buildbot.plugins import changes

    s = changes.P4Source(
        p4base='//depot/project/',
        split_file=lambda branchfile: branchfile.split('/', 1),
        revlink=lambda branch, revision: 'http://p4web:8080/@md=d&@/{}?ac=10'.format(revision))
    c['change_source'] = s

2.5.3.7. SVNPoller

class buildbot.changes.svnpoller.SVNPoller

The SVNPoller is a ChangeSource which periodically polls a Subversion repository for new revisions, by running the svn log command in a subshell. It can watch a single branch or multiple branches.
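For example, a minimal poller watching a single trunk might look like this (the repository URL is hypothetical):

```python
from buildbot.plugins import changes

# Poll the trunk of a (hypothetical) repository every 10 minutes.
c['change_source'] = changes.SVNPoller(
    repourl="svn://svn.example.org/repos/myproject/trunk",
    pollInterval=600)
```
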
SVNPoller accepts the following arguments:

repourl
    The base URL path to watch, like svn://svn.twistedmatrix.com/svn/Twisted/trunk, or http://divmod.org/svn/Divmo/, or even file:///home/svn/Repository/ProjectA/branches/1.5/. This must include the access scheme, the location of the repository (both the hostname for remote repositories and any additional directory names necessary to get to the repository), and the sub-path within the repository's virtual filesystem for the project and branch of interest. The SVNPoller will only pay attention to files inside the subdirectory specified by the complete repourl.
split_file
    A function to convert pathnames into (branch, relative_pathname) tuples. Use this to explain your repository's branch-naming policy to SVNPoller. This function must accept a single string (the pathname relative to the repository) and return a two-entry tuple. Directory pathnames always end with a trailing slash to distinguish them from files, like trunk/src/ or src/. There are a few utility functions in buildbot.changes.svnpoller that can be used as a split_file function; see below for details. For directories, the relative pathname returned by split_file should end with a trailing slash, but an empty string is also accepted for the root, so "branches/1.5.x/" may be converted to ("branches/1.5.x", ""). The default value always returns (None, path), which indicates that all files are on the trunk. Subclasses of SVNPoller can override the split_file method instead of using the split_file= argument.
project
    Set the name of the project to be used for the SVNPoller. This will then be set in any changes generated by the SVNPoller, and can be used in a Change Filter for triggering particular builders.
svnuser
    An optional string parameter. If set, a --username argument will be added to all svn commands. Use this if you have to authenticate to the svn server before you can do svn info or svn log commands. Can be a Secret.
svnpasswd
    Like svnuser, this will cause a --password argument to be passed to all svn commands. Can be a Secret.
pollInterval
    How often to poll, in seconds. Defaults to 600 (checking once every 10 minutes). Lower this if you want Buildbot to notice changes faster; raise it if you want to reduce the network and CPU load on your svn server. Please be considerate of public SVN repositories by using a large interval when polling them.
pollRandomDelayMin
    Minimum delay in seconds to wait before each poll; default is 0. This is useful in case you have a lot of pollers and you want to spread the polling load over a period of time. Setting it equal to the maximum delay will effectively delay all polls by a fixed amount of time. Must be less than or equal to the maximum delay.
pollRandomDelayMax
    Maximum delay in seconds to wait before each poll; default is 0. This is useful in case you have a lot of pollers and you want to spread the polling load over a period of time. Must be less than the poll interval.
pollAtLaunch
    Determines when the first poll occurs. True = immediately on launch; False = wait for one pollInterval (default).
histmax
    The maximum number of changes to inspect at a time. Every pollInterval seconds, the SVNPoller asks for the last histmax changes and looks through them for any revisions it does not already know about. If more than histmax revisions have been committed since the last poll, older changes will be silently ignored. Larger values of histmax will cause more time and memory to be consumed on each poll attempt. histmax defaults to 100.
svnbin
    This controls the svn executable to use. If subversion is installed in an unusual place on your system (outside of the buildmaster's PATH), use this to tell SVNPoller where to find it. The default value of svn will almost always be sufficient.
revlinktmpl
    This parameter is deprecated in favour of specifying a global revlink option.
This parameter allows a link to be provided for each revision (for example, to websvn or viewvc). These links appear anywhere changes are shown, such as on build or change pages. The proper form for this parameter is a URL with the portion that will be substituted for a revision number replaced by '%s'. For example, 'http://myserver/websvn/revision.php?rev=%s' could be used to cause revision links to be created to a websvn repository viewer.

cachepath
    If specified, this is a pathname of a cache file that SVNPoller will use to store its state between restarts of the master.
extra_args
    If specified, the extra arguments will be added to the svn command args.

Several split file functions are available for common SVN repository layouts. For a poller that is only monitoring trunk, the default split file function is available explicitly as split_file_alwaystrunk:

    from buildbot.plugins import changes, util

    c['change_source'] = changes.SVNPoller(
        repourl="svn://svn.twistedmatrix.com/svn/Twisted/trunk",
        split_file=util.svn.split_file_alwaystrunk)

For repositories with the /trunk and /branches/BRANCH layout, split_file_branches will do the job:

    from buildbot.plugins import changes, util

    c['change_source'] = changes.SVNPoller(
        repourl="https://amanda.svn.sourceforge.net/svnroot/amanda/amanda",
        split_file=util.svn.split_file_branches)

When using this splitter, the poller will set the project attribute of any changes to the project attribute of the poller.

For repositories with the PROJECT/trunk and PROJECT/branches/BRANCH layout, split_file_projects_branches will do the job:

    from buildbot.plugins import changes, util

    c['change_source'] = changes.SVNPoller(
        repourl="https://amanda.svn.sourceforge.net/svnroot/amanda/",
        split_file=util.svn.split_file_projects_branches)

When using this splitter, the poller will set the project attribute of any changes to the project determined by the splitter.
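When none of the built-in splitters match your layout, you can supply your own function. Here is a minimal sketch for a hypothetical PROJECT/trunk plus PROJECT/branches/BRANCH layout (the layout and function name are illustrative, not part of Buildbot):

```python
def my_split_file(path):
    # Hypothetical PROJECT/trunk and PROJECT/branches/BRANCH layout.
    pieces = path.split('/')
    if len(pieces) > 1 and pieces[1] == 'trunk':
        # A branch of None means "trunk", matching the default splitter.
        return (None, '/'.join(pieces[2:]))
    if len(pieces) > 2 and pieces[1] == 'branches':
        return ('branches/' + pieces[2], '/'.join(pieces[3:]))
    return None  # anything else (e.g. tags/) is ignored

print(my_split_file('myproj/trunk/src/main.c'))
# (None, 'src/main.c')
print(my_split_file('myproj/branches/1.5/src/main.c'))
# ('branches/1.5', 'src/main.c')
```

Such a function would then be passed to the poller as split_file=my_split_file.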
The SVNPoller is highly adaptable to various Subversion layouts. See Customizing SVNPoller for details and some common scenarios.

2.5.3.8. Bzr Poller

If you cannot insert a Bzr hook in the server, you can use the BzrPoller. To use it, put master/contrib/bzr_buildbot.py somewhere that your Buildbot configuration can import it. Even putting it in the same directory as the master.cfg should work. Install the poller in the Buildbot configuration as with any other change source. Minimally, provide a URL that you want to poll (bzr://, bzr+ssh://, or lp:), making sure the Buildbot user has the necessary privileges.

    # put the bzr_buildbot.py file in the same directory as master.cfg
    from bzr_buildbot import BzrPoller

    c['change_source'] = BzrPoller(url='bzr://hostname/my_project',
                                   poll_interval=300)

The BzrPoller parameters are:

url
    The URL to poll.
poll_interval
    The number of seconds to wait between polls. Defaults to 10 minutes.
branch_name
    Any value to be used as the branch name. Defaults to None; or specify a string; or specify the constants SHORT or FULL from bzr_buildbot.py to get the short branch name or the full branch address.
blame_merge_author
    Normally, the user that commits the revision is the user that is responsible for the change. When run in a pqm (Patch Queue Manager, see https://launchpad.net/pqm) environment, the user that commits is the Patch Queue Manager, and the user that committed the merged parent revision is responsible for the change. Set this value to True if this poller is pointed at a PQM-managed branch.

2.5.3.9. GitPoller

If you cannot take advantage of post-receive hooks, as provided by master/contrib/git_buildbot.py for example, then you can use the GitPoller. The GitPoller periodically fetches from a remote Git repository and processes any changes. It requires its own working directory for operation. The default should be adequate, but it can be overridden via the workdir property.
Note
There can only be a single GitPoller pointed at any given repository.
The GitPoller requires Git-1.7 or later. It accepts the following arguments:
repourl
The git-url that describes the remote repository, e.g. git@example.com:foobaz/myrepo.git (see the git fetch help for more info on git-url formats)
branches
One of the following:
a list of the branches to fetch. Non-existing branches are ignored.
True, indicating that all branches should be fetched
a callable which takes a single argument. It is given a remote refspec (such as 'refs/heads/master') and returns a boolean indicating whether that branch should be fetched.
If not provided, GitPoller will use HEAD to fetch the remote default branch.
branch
Accepts a single branch name to fetch. Exists for backwards compatibility with old configurations.
pollInterval
Interval in seconds between polls; default is 10 minutes.
pollRandomDelayMin
Minimum delay in seconds to wait before each poll; default is 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Setting it equal to the maximum delay will effectively delay all polls by a fixed amount of time. Must be less than or equal to the maximum delay.
pollRandomDelayMax
Maximum delay in seconds to wait before each poll; default is 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Must be less than the poll interval.
pollAtLaunch
Determines when the first poll occurs. True = immediately on launch; False = wait for one pollInterval (default).
buildPushesWithNoCommits
Determines whether a push on a new branch, or an update of an already known branch with already known commits, should trigger a build. This is useful if you have build steps that depend on the name of the branch and you use topic branches for development. When you merge your topic branch into "master" (for instance), a new build will be triggered. (defaults to False)
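The callable form of branches is just a predicate over remote refspecs. A plain-Python sketch (the master-plus-feature-branches policy below is our own example, not anything Buildbot prescribes):

```python
def want_branch(refspec):
    # Predicate for GitPoller's callable `branches` form: given a
    # remote refspec such as 'refs/heads/master', return True if
    # the branch should be fetched, False to skip it.
    prefix = 'refs/heads/'
    name = refspec[len(prefix):] if refspec.startswith(prefix) else refspec
    # Fetch master plus any feature/ topic branches.
    return name == 'master' or name.startswith('feature/')
```

This would be passed as branches=want_branch in the GitPoller configuration.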
gitbin
Path to the Git binary; defaults to just 'git'.
category
Set the category to be used for the changes produced by the GitPoller. This will then be set in any changes generated by the GitPoller, and can be used in a Change Filter for triggering particular builders.
project
Set the name of the project to be used for the GitPoller. This will then be set in any changes generated by the GitPoller, and can be used in a Change Filter for triggering particular builders.
codebase
(optional) Set the codebase that the poller is tracking. If set, GitPoller will store more granular, per-commit data that can be viewed in the web UI.
usetimestamps
Parse each revision's commit timestamp (default is True), or ignore it in favor of the current time, so that recently processed commits appear together in the waterfall page.
encoding
The encoding used to parse the author's name and commit message. The default encoding is 'utf-8'. This will not be applied to file names, since Git translates non-ascii file names to unreadable escape sequences.
workdir
The directory where the poller should keep its local repository. The default is gitpoller_work. If this is a relative path, it will be interpreted relative to the master's basedir. Multiple Git pollers can share the same directory.
only_tags
Determines whether the GitPoller should poll for new tags in the git repository.
sshPrivateKey
(optional) Specifies the private SSH key for git to use. This may be either a Secret or just a string. This option requires Git-2.3 or later. The master must either have the host in the known hosts file, or the host key must be specified via the sshHostKey option.
sshHostKey
(optional) Specifies the public host key to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. The host key must be in the form of <key type> <base64-encoded string>, e.g. ssh-rsa AAAAB3N<…>FAaQ==.
sshKnownHosts
(optional) Specifies the contents of the SSH known_hosts file to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. sshHostKey must not be specified in order to use this option.
auth_credentials
(optional) A username/password tuple to use when running git for fetch operations. The worker's git version needs to be at least 1.7.9.
git_credentials
(optional) See GitCredentialOptions. The worker's git version needs to be at least 1.7.9.
A configuration for the Git poller might look like this:
from buildbot.plugins import changes
c['change_source'] = changes.GitPoller(
    repourl='git@example.com:foobaz/myrepo.git',
    branches=['master', 'great_new_feature'])
2.5.3.10. HgPoller
The HgPoller periodically pulls a named branch from a remote Mercurial repository and processes any changes. It requires its own working directory for operation, which must be specified via the workdir property.
The HgPoller requires a working hg executable and at least read-only access to the repository it polls (possibly through ssh keys or by tweaking the hgrc of the system user Buildbot runs as).
The HgPoller will not transmit any change if there are several heads on the watched named branch. This is similar (although not identical) to the behaviour of the Mercurial executable. This exceptional condition is usually the result of a developer mistake, and usually does not last for long. It is reported in the logs. If fixed by a later merge, the buildmaster administrator does not have anything to do: that merge will be transmitted, together with the intermediate ones.
The HgPoller accepts the following arguments:
name
The name of the poller. This must be unique, and defaults to the repourl.
repourl
The url that describes the remote repository, e.g. http://hg.example.com/projects/myrepo. Any url suitable for hg pull can be specified.
bookmarks
A list of the bookmarks to monitor.
branches
A list of the branches to monitor; defaults to ['default'].
branch
The desired branch to pull. Exists for backwards compatibility with old configurations.
workdir
The directory where the poller should keep its local repository. It is mandatory for now, although later releases may provide a meaningful default. It also serves to identify the poller in the buildmaster's internal database. Changing it may result in re-processing all changes so far. Several HgPoller instances may share the same workdir to pool the common history between two different branches, easing the load on local and remote system resources and bandwidth. If relative, the workdir will be interpreted from the master directory.
pollInterval
Interval in seconds between polls; default is 10 minutes.
pollRandomDelayMin
Minimum delay in seconds to wait before each poll; default is 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Setting it equal to the maximum delay will effectively delay all polls by a fixed amount of time. Must be less than or equal to the maximum delay.
pollRandomDelayMax
Maximum delay in seconds to wait before each poll; default is 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Must be less than the poll interval.
pollAtLaunch
Determines when the first poll occurs. True = immediately on launch; False = wait for one pollInterval (default).
hgbin
Path to the Mercurial binary; defaults to just 'hg'.
category
Set the category to be used for the changes produced by the HgPoller. This will then be set in any changes generated by the HgPoller, and can be used in a Change Filter for triggering particular builders.
project
Set the name of the project to be used for the HgPoller.
This will then be set in any changes generated by the HgPoller, and can be used in a Change Filter for triggering particular builders.
usetimestamps
Parse each revision's commit timestamp (default is True), or ignore it in favor of the current time, so that recently processed commits appear together in the waterfall page.
encoding
The encoding used to parse the author's name and commit message. The default encoding is 'utf-8'.
revlink
A function that maps a branch and revision to a valid url (e.g. hgweb), stored along with the change. This function must be a callable which takes two arguments, the branch and the revision. Defaults to lambda branch, revision: u''.
A configuration for the Mercurial poller might look like this:
from buildbot.plugins import changes
c['change_source'] = changes.HgPoller(
    repourl='http://hg.example.org/projects/myrepo',
    branch='great_new_feature',
    workdir='hg | 2026-01-13T09:30:34
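The revlink callable simply formats a URL from the branch and revision it receives. A plain-Python sketch (the hgweb URL pattern below is made up for illustration; the documented default is effectively lambda branch, revision: u''):

```python
def hgweb_revlink(branch, revision):
    # revlink callable for HgPoller: map (branch, revision) to a URL
    # stored along with the change. Returns an empty string when there
    # is no revision, mirroring the documented default.
    if not revision:
        return ''
    return 'http://hg.example.org/projects/myrepo/rev/%s' % revision
```

This would be passed as revlink=hgweb_revlink in the HgPoller configuration.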
https://llvmweekly.org/issue/589 | LLVM Weekly - #589, April 14th 2025 Welcome to the five hundred and eighty-ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. Apologies that last week's LLVM Weekly issue went out with an incorrect subject. I'll hopefully see a number of you at EuroLLVM this week. I'll be presenting on work done with the support of RISE to improve RISC-V LLVM testing, while my Igalia colleague Luke Lau's presentation covers work to further improve RISC-V vector codegen (extending the VL Optimizer). News and articles from around the web and events Qualcomm have open sourced ELD, an embedded linker with support for AArch32/AArch64, Hexagon, and RISC-V. David Malcolm writes on the Red Hat Developer blog about usability improvements in GCC 15. Keith Packard blogged about experience using -fsanitize=undefined with Picolibc. The new Munich LLVM meetup will take place on April 29th. A Cambridge pub LLVM social will take place on April 28th. According to the LLVM calendar, in the coming week there will be the following: Office hours with the following hosts: Aaron Ballman, Alexey Bader, Phoebe Wang, Johannes Doerfert. Online sync-ups on the following topics: Flang, vectorizer improvements, modules, security response group, LLVM/Offload, Clang C/C++ language working group, SPIR-V, OpenMP for flang, memory safety working group. For more details see the LLVM calendar, getting involved documentation on online sync-ups and office hours.
On the forums “Sirraide” proposed making -Wreturn-type an error by default in Clang. Aaron Ballman summarised follow-up discussion in a meeting where it was concluded that leaving it as a warning by default makes most sense (due to the possibility of false positives for this warning), but a new -Whardened option could be introduced which opts into stronger diagnostic behaviour. Tristan Ross proposed adding UEFI platform support to LLVM libc. Alex Zinenko is looking to rearrange some upcoming Open MLIR meeting time slots and is seeking input. Slides were made available from the recent Open MLIR meeting on an interaction nets dialect. Timur Golubovich is seeking to add YAML schema support to LLVM's YAML parser. Sebastian Pop posted an RFC on adding runtime assumptions to DependenceAnalysis. Simon Tatham started an RFC discussion on asm goto vs branch target enforcement, noting that GCC and Clang behaviour currently differs. Joel E. Denny is looking for feedback on proposed efforts to maintain more accurate block frequencies, with the motivation of improving loop transformations. LLVM commits Production of debug intrinsics is “soft” disabled. The commit message requests that if this breaks your downstream tests you get in touch to help determine whether any further transition work is needed to support the move away from using debug intrinsics. 6a45fce. A new POISON SelectionDAG node was introduced to represent poison values from IR. 378ac572. The gn buildsystem now has a check-builtins target. 9222607. llc learned to support the -M option as used by llvm-objdump and llvm-mc. 02b377d. As part of work to improve the runtime of TableGen for architectures like AMDGPU with complex sub-register relations, the super-register class computation was improved. With future yet-to-land work, the commit message indicates it can cut down AMDGPU TableGen runtime by half. 9c31155. A scheduler model was added for the IBM z17 processor. 80267f8.
Clang commits When given LLVM IR input, Clang will now always verify it first. This improves error reporting when people pass hand-written IR to Clang. 87a4215. Initial support started to land for OpenACC lowering with ClangIR. 231aa30, 6e7c40b. Initial function call support was added to ClangIR. 85614e1. P2719: type-aware allocation and deallocation functions was implemented. 1cd5926. Other project commits Haiku support was added to the sanitizers. d1fd977. Some initial optimisation was done for formatted input in flang-rt. 18fe012. MLIR's vector type was extended to support pointer-like types. b7b3758. The SMT dialect from the CIRCT project was upstreamed into MLIR. de67293. Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://www.iso.org/es/sectores/edificacion-construccion | ISO - Building and construction
Building and construction: building information modelling (BIM), accessibility of buildings, construction materials, building structures, and other topics.
International Standards give the building and construction sector a reliable basis for innovation, safety, and sustainability. From optimizing energy performance and ensuring accessibility to managing facilities and digitizing information, these standards give industry players tools to achieve maximum efficiency, improve quality, and collaborate smoothly across projects and countries by aligning with globally recognized references.
Essential standards
ISO 41001 Facility management — Management systems — Requirements with guidance for use. Published in 2018. CHF 196
ISO 7685 Glass-reinforced thermosetting plastics (GRP) pipes — Determination of initial ring stiffness. Published in 2026. CHF 67
ISO/TS 16733-2 Fire safety engineering — Selection of design fire scenarios and design fires. Part 2: Design fires. Published in 2026. CHF 181
ISO 19650-1 Organization and digitization of information about buildings and civil engineering works, including building information modelling (BIM) — Information management using building information modelling. Part 1: Concepts and principles. Published in 2018. CHF 179
ISO 13954 Plastics pipes and fittings — Peel decohesion test for polyethylene (PE) electrofusion assemblies of nominal outside diameter greater than or equal to 90 mm. Published in 2025. CHF 67
ISO 19650-2 Organization and digitization of information about buildings and civil engineering works, including building information modelling (BIM) — Information management using building information modelling. Part 2: Delivery phase of the assets. Published in 2018. CHF 155
ISO 29481-1 Building information models — Information delivery manual. Part 1: Methodology and format. Published in 2025. CHF 159
Insights
Personal protective equipment: protecting workers in a constantly evolving workplace. For workers in high-risk environments, PPE is not just a workplace formality: it is a line of defence that saves lives and prevents injuries and catastrophes every day.
Energy storage: the engine of the renewable energy future. From the compact lithium-ion battery powering your e-bike to the colossal grid-scale solutions capable of keeping entire neighbourhoods running, energy storage is the secret formula that makes renewable energy reliable around the clock.
Sustainable development for a changing planet. Cities around the world are racing to adapt to climate change. Rising temperatures driven by human greenhouse gas emissions are upsetting the balance of climate systems. | 2026-01-13T09:30:34
https://llvmweekly.org/issue/604 | LLVM Weekly - #604, July 28th 2025 Welcome to the six hundred and fourth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. News and articles from around the web and events The LLVM hearts ML Workshop has been announced, taking place the day before the LLVM Dev Meeting. As a reminder, the call for papers for the LLVM-HPC Workshop at SC'25 closes on August 15. The Klipspringer blog features a post presenting an LLVM garbage collection statepoints demo. Fangrui Song gave a detailed write-up of recent changes to section fragment handling in LLVM. According to the LLVM Calendar, in the coming week there will be the following: Office hours with the following hosts: Johannes Doerfert, Renato Golin. Online sync-ups on the following topics: ClangIR upstreaming, pointer authentication, OpenMP, Flang, LLVM qualification working group, RISC-V, libc, HLSL, MLGO. For more details see the LLVM calendar, getting involved documentation on online sync-ups and office hours. On the forums Louis Dionne proposed adding a dependency to libc++ on Boost.Math for C++17 math special functions, which as of now aren't implemented. Andrew Kelley expressed concern on behalf of the Zig community, but other respondents are more positive and are asking for more details on the Zig concerns. Baranov Victor would like to provide a better experience for new clang-tidy users, proposing the introduction of a set of default checks for users to start with.
Now that we have enabled the warning when bypassing premerge testing, Mehdi Amini followed up with an RFC to enable GitHub's auto-merge feature to allow merging a patch once the CI completes. There is resounding support for this. Henri Menke proposes the introduction of flang-tidy, inspired by clang-tidy. A PR including 20 checks is already available for review. "stefanp-ibm" initiated a discussion on extending the machine scheduler to more accurately model instruction fusion (e.g. understanding that two fused instructions may now complete in a single cycle). "Aayush910" would like to improve LTO build times with enhanced split DWARF. Currently split DWARF is applied after LTO, and the proposal is to temporarily strip out duplicate debug metadata, run LTO on the reduced IR, and then restore it after LTO prior to code generation and split DWARF emission. Fabian Mora suggested creating a GitHub 'project' to better organise MLIR issues. Panagiotis Karouzakis started a discussion about DemandedBits and division operations. Dominik Adamski would like to introduce a no-loop mode for OpenMP GPU kernels in Flang. Following a long discussion on 64-bit source locations in Clang, Haojian Wu posted a compromise proposal for an opt-in CMake option for 64-bit source locations. LLVM commits The newly added LDBG macro provides a handy shortcut for debug logging. d368d11. An initial SFrame parser and dumper was added, as well as GNU-compatible syntax parsing and an llvm-mc command line flag. aa7ada1, 29e8599. llvm-objdump gained support for --debug-inlined-funs, which will print the location of inlined functions alongside disassembly. e94bc16. Guidance was added on specifying GitHub workflows in a way that doesn't interact negatively with stacked PRs. 09580f7. DataWithEVL is now used as the preferred tail folding style for RISC-V. This is in preparation for making EVL tail folding the default. 20c52e4. The LangRef now documents the case of allocated objects that can grow (e.g.
using mmap to reserve a wide range of pages). b1aece9. Vector instruction latencies for the RISC-V SpacemiT-X60 started to be added. 8952225. Clang commits The experimental lifetime safety analysis can now be enabled/disabled via -f[no-]experimental-lifetime-safety. 0d04789. Reduced BMI (Binary Module Interface) mode for C++20 modules is now the default option. 255a163. Code related to trigraph support in clang-format was removed. 12a3afe. ClangIR support for array constructors was added. 3e9d369. __builtin_wasm_test_function_pointer_signature was added. 15b0368. Other project commits The lldb-rpc-gen tool was committed. 68c8c8ce. !$omp unroll support was implemented in Flang. b487f9a. LLVM's libc now has generic comparison operations for floating point types implemented in its FPUtil library. e789f8b. LLD can now read AArch64 build attributes and convert them into GNU properties. d52675e. Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://support.microsoft.com/ar-sa/topic/360379ec-153b-4ab4-93ff-85be97789dbb | Virus and threat protection in the Windows Security app - Microsoft Support
Applies to: Windows 11, Windows 10
The Virus & threat protection page in the Windows Security app is designed to help you protect your device from various threats such as viruses, malware, and ransomware.
The page provides access to several features and settings to ensure comprehensive protection, and is divided into the following sections:
Current threats: This section displays any threats currently found on your device, the last time a scan was run, how long it took, and how many files were scanned. You can also start a new quick scan or choose from the other scan options for a more thorough or custom scan.
Virus & threat protection settings: In this section you can manage the settings for Microsoft Defender Antivirus and third-party antivirus products.
Virus & threat protection updates: This section is dedicated to ensuring your device is protected with the latest security intelligence updates.
Ransomware protection: In this section you can configure controlled folder access, which prevents unknown apps from changing files in protected folders. It also provides options to configure OneDrive to help you recover from a ransomware attack.
In the Windows Security app on your PC, select Virus & threat protection, or use the following shortcut: Virus & threat protection
Current threats
Under Current threats, you can:
See any threats currently found on your device
See the last time a scan was run on your device, how long it took, and how many files were scanned
Start a new quick scan, or open scan options to run a more thorough or custom scan
See threats that were quarantined before they could affect you, and anything identified as a threat that you have allowed to run on your device
Scan options
Although Windows Security is on and scans your device automatically, you can perform an additional scan whenever you want.
Quick scan: This option is useful when you don't want to spend the time running a full scan on all your files and folders. If Windows Security recommends running one of the other types of scans, you'll be notified when the quick scan is done. Start a quick scan
Full scan: Scans every file and program on your device. Start a full scan
Custom scan: Scans only the files and folders that you select. Start a custom scan
Microsoft Defender Antivirus (offline scan): Uses the latest definitions to scan your device for the latest threats.
This happens after a restart, without loading Windows, so any persistent malware has a harder time hiding or defending itself. Run it if you're worried that your device may have been exposed to malware or a virus, or if you want to scan your device without being connected to the internet. This will restart your device, so be sure to save any files you may have open. Microsoft Defender Offline will load and perform a quick scan of your PC in the Windows Recovery Environment. When the scan is complete, your PC will restart automatically. Start an offline scan
Note: To see the results of an offline scan, open the Windows Security app on your Windows device and select Protection history.
Allowed threats
The Allowed threats page shows a list of items that Windows Security has identified as threats but that you have chosen to allow. Windows Security won't take any action against threats you have allowed. If you accidentally allowed a threat and want to remove it, select it from the list, then select the Don't allow button. The threat will be removed from the list, and Windows Security will act on it again the next time it sees it.
Virus & threat protection settings
Use the Virus & threat protection settings when you want to customize your level of protection, send sample files to Microsoft, exclude trusted files and folders from repeated scanning, or temporarily turn off your protection.
In the Windows Security app on your PC, select Virus & threat protection > Manage settings, or use the following shortcut: Virus & threat protection settings
Real-time protection
Real-time protection is a feature in the Windows Security app that continuously monitors your device for potential threats such as viruses, malware, and spyware. This feature ensures your device is actively protected by scanning files and programs as they are accessed or executed. If any suspicious activity is detected, real-time protection alerts you and takes appropriate action to prevent the threat from causing harm.
You can use the real-time protection setting to turn it off temporarily; however, real-time protection will turn itself back on automatically after a short while to resume protecting your device. While real-time protection is off, files you open or download won't be scanned for threats. Keep in mind that if you do this, your device may be vulnerable to threats, and scheduled scans will continue to run; however, files that are downloaded or installed won't be scanned until the next scheduled scan. You can turn real-time protection on or off with the toggle.
Notes:
If you only want to exclude a single file or folder from antivirus scanning, you can do so by adding an exclusion. This is safer than turning off the entire antivirus protection.
If you install a non-Microsoft antivirus program, Microsoft Defender Antivirus will turn itself off automatically.
If tamper protection is turned on, you'll need to turn it off before you can turn off real-time protection.
Dev Drive protection
Note: Dev Drive protection is not available on Windows 10.
Dev Drive protection provides a secure, isolated space for developers to store and work on their code, ensuring their development environment is protected from potential threats and vulnerabilities.
Dev Drive protection includes a performance mode that scans the Dev Drive asynchronously. This means that security scans are deferred until after the file operation has completed, rather than being performed synchronously while the file operation is being processed. This asynchronous scanning mode provides a balance between threat protection and performance, ensuring developers can work efficiently without experiencing significant delays due to security scans.
You can turn Dev Drive protection on or off with the toggle.
Select View volumes to review the list of volumes that have Dev Drive protection enabled.
To learn more, see Protect your Dev Drive using performance mode.
Cloud-delivered protection
This setting allows Microsoft Defender to receive constantly updated improvements from Microsoft while you are connected to the internet. This results in more accurate identification, stopping, and remediation of threats.
Automatic sample submission
If you're connected to the cloud with cloud-delivered protection, you can have Defender automatically send suspicious files to Microsoft to be checked for potential threats. Microsoft will notify you if you need to send additional files, and will alert you if a requested file contains personal information so you can decide whether or not you want to send that file. If you're concerned about a file and want to make sure it is submitted for evaluation, you can select Submit a sample manually to send us any file you want.
Tamper protection
Tamper protection is a feature that helps prevent malicious apps from changing important Microsoft Defender Antivirus settings, including real-time protection and cloud-delivered protection. By ensuring these settings remain unchanged, tamper protection helps maintain the integrity of your device's security configuration and prevents malicious apps from disabling critical security features.
If tamper protection is turned on and you're an administrator on your device, you can still change these settings in the Windows Security app; however, other apps can't change them. You can turn tamper protection on or off with the toggle.
Note: Tamper protection doesn't affect how third-party antivirus apps work or how they register with Windows Security.
Controlled folder access
Use the controlled folder access setting to manage which folders untrusted apps can make changes to. You can also add additional apps to the trusted list so they can make changes in those folders. This is a powerful tool for making your files safer from ransomware. When controlled folder access is turned on, many of the folders you use most often are protected by default. This means that content in any of these folders cannot be accessed or changed by any unknown or untrusted apps. If you add additional folders, they become protected as well.
Learn more about controlled folder access
Exclusions
By default, Microsoft Defender Antivirus runs in the background, scanning the files and processes that you open or download for malware. There may be instances where you have a specific file or process that you don't want scanned in real time. When that happens, you can add an exclusion for that file, file type, folder, or process.
Warning: Adding an exclusion to Windows Security means that Microsoft Defender Antivirus will no longer check those types of files for threats, which could leave your device and data vulnerable. Make sure you really want to do this before you proceed.
Exclusions apply only to real-time scanning with Microsoft Defender Antivirus. Any scheduled scans with Microsoft Defender Antivirus or third-party anti-malware products may still scan these files or processes.
To add an exclusion
Select Add or remove exclusions
Choose one of the four options based on the type of exclusion you're trying to add:
File: excludes a specific file
Folder: excludes a specific folder (and all the files within that folder)
File type: excludes all files of a specified type, such as .docx or .pdf
Process: adding an exclusion for a process means that any file opened by that process will be excluded from real-time scanning. These files will still be scanned by any on-demand or scheduled scans, unless a file or folder exclusion is also created that exempts them
Tip: It's recommended that you use the full path and file name to exclude a specific process. This makes it less likely that malware could use the same file name as a trusted, excluded process and evade detection.
To remove an exclusion
Warning: Excluding a file or process from antivirus scanning can make your device or data more vulnerable. Make sure you want to do this before you proceed.
Select Add or remove exclusions
Select the exclusion you'd like to remove and select Remove
Using wildcards or environment variables
You can use the wildcard character "*" to substitute for any number of characters.
In file type exclusions: if you use an asterisk in the file extension, it acts as a wildcard for any number of characters. "*st" will exclude .test, .past, .invest, and any other file type where the extension ends in st
In process exclusions: C:\MyProcess\* will exclude files opened by all processes located in C:\MyProcess or any subfolders of C:\MyProcess ; test.* will exclude files opened by all processes named test, regardless of the file extension
You can use environment variables in your process exclusions as well. For example: %ALLUSERSPROFILE%\CustomLogFiles\test.exe will exclude any files opened by C:\ProgramData\CustomLogFiles\test.exe . For a complete list of Windows environment variables, see: Recognized environment variables.
Virus & threat protection updates
Security intelligence (sometimes referred to as definitions) consists of files that contain information about the latest threats that could infect your device. Windows Security uses security intelligence every time a scan is run. Windows automatically downloads the latest security intelligence as part of Windows Update, but you can also check for it manually. In the Windows Security app on your PC, select Virus & threat protection > Protection updates > Check for updates, or use the following shortcut: Check for updates
Ransomware protection
The ransomware protection page in Windows Security has settings for both protection from ransomware and recovery if you're attacked. In the Windows Security app on your PC, select Virus & threat protection > Manage ransomware protection, or use the following shortcut: Manage ransomware protection
Controlled folder access
Controlled folder access is designed to protect your valuable data from malicious apps and threats, such as ransomware. This feature works by checking apps against a list of known, trusted apps and blocking unauthorized or unsafe apps from accessing or changing files in protected folders.
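The wildcard behavior described above can be sketched with a small illustrative matcher. This is only a model for understanding the semantics (a "*" stands in for any number of characters, everything else matches literally); it is not Defender's actual implementation, and the class and method names are invented for this example:

```java
import java.util.regex.Pattern;

// Illustrative model of the exclusion wildcard semantics described above.
class WildcardSketch {
    static boolean matches(String pattern, String value) {
        // Quote the pattern so characters like "." and "\" are literal,
        // then splice ".*" in wherever the pattern contains "*".
        String regex = Pattern.quote(pattern).replace("*", "\\E.*\\Q");
        return Pattern.matches(regex, value);
    }
}
```

Under this model, "*st" matches the extensions .test, .past, and .invest, while "test.*" matches a process named test with any extension, as described above.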
When controlled folder access is enabled, it helps protect your data by:
Blocking unauthorized changes: only trusted apps are allowed to make changes to files in protected folders. If an app is determined to be malicious or suspicious, it's blocked from making any changes
Protecting important folders: by default, controlled folder access protects common folders such as Documents, Pictures, Videos, Music, and Desktop. You can also add additional folders to be protected
Providing notifications: if an app is blocked from making changes, you receive a notification so you can take the appropriate action
To add or remove protected folders, select Protected folders, or use the following shortcut: Protected folders
To add or remove an app through controlled folder access, select Allow an app through Controlled folder access, or use the following shortcut: Allow an app through Controlled folder access
Warning: Be thoughtful about which apps you add. Any added app will be able to access files in the protected folders, and if that app is compromised, the data in those folders could be at risk.
If you get the message App is blocked when you try to use a familiar app, you can unblock it using the following steps:
Note the path of the blocked app
Select the message, then select Add an allowed app
Browse for the program you want to allow access to
Note: If you try to save a file to a folder and the folder is blocked, that means the app you're using isn't allowed to save to that location. If this happens, save the file to another location on your device, then use the previous steps to unblock the app, and you'll be able to save files to the location you want. For more details about controlled folder access, see Protect important folders with controlled folder access.
Ransomware data recovery
The ransomware data recovery section is designed to help you recover your files in the event of a ransomware attack. It provides several key functions to ensure your data stays safe and can be restored if it's encrypted or blocked by ransomware.
The ransomware data recovery section is integrated with Microsoft OneDrive. This lets you back up your important files to OneDrive, ensuring that you have a secure copy of your data that can be restored in the event of a ransomware attack. If your files are affected by ransomware, the Windows Security app will guide you through the process of restoring your files from OneDrive, helping you recover your data quickly without having to pay the ransom. You'll receive notifications and alerts if ransomware is detected or if there are any issues with your OneDrive backup, so you're always aware of the status of your data protection.
https://logging.apache.org/log4j/2.x/jakarta.html | Integrating with Jakarta EE :: Apache Log4j
Integrating with Jakarta EE
In a Jakarta EE environment, there are two possible approaches to logging: Each application can use its own copy of Log4j Core and include log4j-core in the WAR or EAR archive. Applications can also use a single copy of Log4j Core that must be installed globally on the application server. While the first approach is the easiest to implement, it has some limitations: Shared libraries Log events emitted by each application and the libraries bundled with it will be handled by Log4j Core, but events related to the application emitted by a shared library (e.g. a JPA implementation) will be handled by the application server.
To diagnose a problem with the application, you might need to look into multiple log files. Separate log files Each application must use a different log file to prevent problems with concurrent access to the same file by multiple applications. Problems may arise, especially if a rolling file appender is used. Lifecycle Web applications have a different lifecycle from the application server. Additional care is required to stop Log4j Core when the application is stopped. See Integrating with web applications for more details. The second approach requires changes to the configuration of the application server, but produces better results in terms of separating log events of different applications. See Sharing Log4j Core between Web Applications for more details. Integrating with web applications To avoid problems, some Log4j API and Log4j Core features are automatically disabled when running in a Jakarta EE environment. Most notably: the usage of ThreadLocal for object pooling is disabled. a web-safe implementation of ThreadContextMap is used. JMX notifications are sent synchronously. the JVM shutdown hook is disabled. See log4j2.isWebapp for more details. Using a logging implementation like Log4j Core in a Jakarta EE application requires particular care. Since the lifecycle of a container or web application is independent of the lifecycle of the JVM, it’s important for logging resources to be properly cleaned up (database connections closed, files closed, etc.) when the container or web application shuts down. To properly synchronize the lifecycles of Log4j Core and Jakarta EE applications, an additional Log4j Web artifact is provided. Installation To install Log4j Web in your web application, you need to add it as a runtime dependency: Maven Gradle We assume you use log4j-bom for dependency management. 
<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jakarta-web</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jakarta-web' Are you using Jakarta EE 8 or any version of Java EE? Jakarta EE 8 and all Java EE application servers use the legacy javax package prefix instead of jakarta . If you are using those application servers, you should replace the dependencies above with: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-web</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-web' If you are writing a Servlet 3.0 or later application, Apache Log4j Web will register a ServletContainerInitializer that takes care of configuring the Log4j lifecycle for you. Under the hood this will: initialize Log4j Core with the correct configuration file. register a Log4jServletContextListener to automatically shut down Log4j Core when the application shuts down. register a Log4jServletFilter to enable the web lookup . See also Application server specific notes . While the Servlet Specification allows web fragments to automatically add context listeners, it does not give any guarantees regarding the order in which those listeners are executed (see Section 8.2.3 ). If other context listeners in your application use logging, you need to make sure that Log4jServletContextListener is the last listener to be executed at shutdown.
To do it, you must create a web.xml descriptor and add the Log4jServletContextListener explicitly as the first context listener: Snippet from an example web.xml <listener> <description>Handles Log4j Core lifecycle</description> <listener-class> org.apache.logging.log4j.web.Log4jServletContextListener </listener-class> </listener> Manual installation If you are maintaining an older Servlet 2.5 (or earlier) application, or if you disabled the servlet container initializer , you need to register the listener and filter manually in your deployment descriptor: Snippet from an example web.xml <listener> <description>Handles Log4j Core lifecycle</description> <listener-class> org.apache.logging.log4j.web.Log4jServletContextListener </listener-class> </listener> <filter> <description>Adds Log4j Core specific attributes to each request</description> <filter-name>log4jServletFilter</filter-name> <filter-class>org.apache.logging.log4j.web.Log4jServletFilter</filter-class> </filter> <filter-mapping> <filter-name>log4jServletFilter</filter-name> <url-pattern>/*</url-pattern> <dispatcher>REQUEST</dispatcher> <dispatcher>FORWARD</dispatcher> <dispatcher>INCLUDE</dispatcher> <dispatcher>ERROR</dispatcher> <!-- Servlet 3.0 with disabled auto-initialization; not supported in 2.5 <dispatcher>ASYNC</dispatcher> --> </filter-mapping> Configuration Log4j Web provides many configuration options to finely tune its installation. These configuration options should be specified as servlet context initialization parameters . isLog4jAutoInitializationDisabled Type boolean Default value false If set to true , the Log4jServletContainerInitializer will be disabled, which prevents the automatic registration of both the Log4jServletContextListener and Log4jServletFilter . isLog4jAutoShutdownDisabled Type boolean Default value false If set to true , the Log4jServletContainerInitializer will not register a Log4jServletContextListener to handle the web application shutdown. log4j.stop.timeout.timeunit Type TimeUnit Default value SECONDS Specifies the TimeUnit used for the shut-down delay.
log4j.stop.timeout Type long Default value 30 It specifies the duration of the shut-down delay. log4jContextName Type String Default value automatically computed Used to specify the name of the logger context. If JndiContextSelector is used, this parameter must be explicitly provided. Otherwise, the default value is: the servlet context name, if present, the servlet context path, including the leading / , otherwise. isLog4jContextSelectorNamed Type boolean Default value false Must be set to true to use the JNDI configuration . log4jConfiguration Type URI Default value none The location of a Log4j Core configuration file. If the provided value is not an absolute URI, Log4j interprets it as: the path to an existing servlet context resource , the path to an existing file, the path to a classpath resource . If no value is provided: Log4j Web looks for a servlet context resource named /WEB-INF/log4j2-<contextName>.<extension> , where <contextName> is the name of the logger context, if no such file exists it looks for a servlet context resource named /WEB-INF/log4j2.<extension> , otherwise, it searches for a configuration file on the classpath using the usual automatic configuration procedure . Asynchronous requests and threads In order for the web lookup to work correctly, Log4j must be able to always identify the ServletContext used by the current thread. When standard requests, forwards, inclusions, and error resources are processed, the Log4jServletFilter binds the LoggerContext to the thread handling the request, and you don’t have to do anything. The handling of asynchronous requests is, however, trickier, since it allows you to execute code on threads that were not prepared by Log4jServletFilter . Such a situation occurs, for example, if your code was started using the AsyncContext.start(Runnable) method. To successfully propagate the logger context along asynchronous calls, the WebLoggerContextUtils helper class is made available.
Using this class you can either decorate a Runnable with method calls that bind the appropriate logger context to the thread: Snippet from an example AsyncServlet.java AsyncContext asyncContext = req.startAsync(); asyncContext.start(WebLoggerContextUtils.wrapExecutionContext(getServletContext(), () -> { // Put your logic here })); or, if more flexibility is required, you can apply the same logic by using Log4jWebSupport : Snippet from an example AsyncServlet.java AsyncContext asyncContext = req.startAsync(); Log4jWebSupport webSupport = WebLoggerContextUtils.getWebLifeCycle(getServletContext()); asyncContext.start(() -> { try { webSupport.setLoggerContext(); // Put your logic here } finally { webSupport.clearLoggerContext(); } }); Logging in JavaServer Pages The Log4j Tag library is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels . To help users add logging statements to JavaServer Pages, Log4j provides a JSP tag library modeled after the Jakarta Commons Log Tag library . To use it, you need to add the following runtime dependency to your web application project: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-taglib</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-taglib' and add the following declaration to your JSP pages: <%@ taglib prefix="log4j" uri="http://logging.apache.org/log4j/tld/log" %> The Log4j Taglib component is deprecated and is scheduled for removal in Log4j 3. Currently, it only works with JavaServer Pages 2.3 and previous releases, and no version compatible with Jakarta Server Pages 3.0 is available. 
The Log4j Taglib library defines a tag for most Logger methods, including: simple and parameterized log statements: Snippet from an example taglib.jsp <log4j:debug message="Simple message"/> <log4j:info message="Hello {}!" p0="${param.who}"/> <log4j:warn message="Message with marker" marker="${requestScope.marker}"/> flow tracing statements: Snippet from an example taglib.jsp <log4j:entry p0="${param.who}"/> <log4j:exit/> catching and throwing statements: Snippet from an example taglib.jsp <c:catch var="exception"> <%= 5 / 0 %> </c:catch> <c:if test="${exception != null}"> <log4j:catching exception="${exception}"/> </c:if> tags to test the current log level: <log4j:ifEnabled level="INFO"> <code>INFO</code> is enabled. </log4j:ifEnabled> tags to set the name of the logger used: Snippet from an example taglib.jsp <log4j:setLogger logger="example.jsp"/> a dump tag that prints the contents of a JSP scope: Snippet from an example taglib.jsp <log4j:dump scope="request"/> Application server specific notes WildFly WildFly implicitly adds a shared copy of log4j-api to each web application deployment. This copy of log4j-api is configured to forward all events to WildFly’s centralized logging system and does not use the copy of Log4j Core bundled with the web application. To use Log4j Core, you need to set the add-logging-api-dependencies attribute of the logging subsystem to false . See WildFly documentation for more details. Sharing Log4j Core between Web Applications Since Log4j Core supports multiple logger contexts , it is possible to share a single instance of Log4j Core without losing the ability to configure logging for each application separately. Sharing Log4j Core has two main advantages: You can send log statements from multiple applications to the same log file. Under the hood, Log4j Core will use a single manager per file, which will serialize concurrent access from multiple applications. 
You can capture log statements issued by other shared libraries, so you don’t have to look for them in the global application server log. Setup To share Log4j Core between applications, you need to share at least the following JAR files: log4j-api log4j-core log4j-jakarta-web (or log4j-web if you use a Java EE application server) Since sharing libraries between applications is not part of the Jakarta EE standard, the instructions are specific to each application server: GlassFish Jetty OpenLiberty Payara Tomcat WildFly In GlassFish, you can add those libraries to the common classloader . See GlassFish documentation for more details. Recent versions of Jetty have a logging-log4j2 module that can be easily enabled to share Log4j Core between applications and to use Log4j Core for the Jetty server itself. See Jetty Modules documentation for more details. In OpenLiberty, you can add Log4j as a global library . See OpenLiberty documentation for more details. See Payara Common Libraries documentation . In Tomcat, you can use the common classloader . See Tomcat classloader documentation for more details. You can install Log4j as a global module or in a global directory . See WildFly EE Application Deployment documentation for more details. Check also the WildFly note above . Web application classloaders (see Servlet Specification 10.7.2 ) use a "parent last" delegation strategy, but prevent applications from overriding implementation classes provided by the container. If you share Log4j between applications and the applications themselves contain Log4j Core, the logging behavior depends on the application server. Some application servers will use the shared instance (e.g., WildFly), while others will use the application instance (e.g., Tomcat). There are two solutions to this problem: you can remove Log4j from the WAR or EAR archive: Maven Gradle You can declare the scope of all Log4j libraries as provided .
You can add log4j-api to the providedCompile configuration and log4j-core to the providedRuntime configuration. See the Gradle WAR plugin for more details. you can use an application-server-specific configuration option to delegate the loading of Log4j API to the parent classloader. Log separation When using a shared instance of Log4j Core, you might be interested in identifying the application associated with a given log event. Log4j Core provides a mechanism to split all Logger instances into logging domains called LoggerContext s. You therefore have two ways to separate log events: You can create a separate logger context for each web application and one context for the common libraries. See Multiple logger contexts for more details. You can also use a single logger context for all log events, but use lookups to add context data to your log events. See Single logger context for more details. These two approaches deliver similar results for log events generated by the web applications themselves or the libraries bundled in the WAR or EAR archive. Differences between these approaches appear in the handling of shared libraries. There are two kinds of shared libraries: Shared libraries that use static Logger fields. These libraries will always use the same logger context, which will not be one of the per-application contexts. This kind includes all shared libraries that were not written with Jakarta EE in mind. Shared libraries that use instance Logger fields. These libraries will use the logger context associated with the web application that uses them. Application server implementations usually use instance Logger fields. Since the first kind of library is more common, the Single logger context approach will, perhaps counterintuitively, usually give better results than the Multiple logger contexts approach. Single logger context By default, Log4j Core creates a separate logger context per classloader.
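The per-classloader default can be sketched with a simplified model. This is an illustrative stand-in, not the actual ClassLoaderContextSelector implementation, and the class name is invented for this example:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of a per-classloader context selector: every class
// loaded by the same classloader resolves to the same logger context,
// so each web application (with its own classloader) gets its own
// context, while classes from a shared classloader all resolve to one
// common context.
class ContextSelectorSketch {
    private final Map<ClassLoader, String> contexts = new HashMap<>();

    String contextFor(Class<?> caller) {
        // One context per classloader, created on first use.
        return contexts.computeIfAbsent(caller.getClassLoader(),
                cl -> "context-" + contexts.size());
    }
}
```

Under this model, a class loaded by the server's (or bootstrap) classloader and a class loaded by a web application's classloader resolve to different contexts, which is why shared libraries with static Logger fields never end up in a per-application context.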
To use a single logger context, you need to set the log4j2.contextSelector system property to: either org.apache.logging.log4j.core.selector.BasicContextSelector to use synchronous loggers, or org.apache.logging.log4j.core.async.BasicAsyncLoggerContextSelector to use asynchronous loggers. In this approach, you must use lookups to register the application that generated a log event. The most useful lookups in this case are: Web lookup It does not require any setup, but it is available only after Log4jServletFilter has been executed. Some log events pertinent to a web application can therefore be left unmarked. See web lookup for more information. JNDI lookup It covers a larger part of the handling of a request, but it requires additional setup to export the name of the application via JNDI. See JNDI lookup for more information. When using a single logger context, you can choose between: Logging all events to a single appender. We strongly recommend using a structured layout (e.g., JSON Template Layout ) with an additional field capturing the Servlet context name. This would allow separation of application logs by filtering on the context name.
The following example demonstrates this scheme using a File Appender with a JSON Template Layout: XML JSON YAML Properties <File name="GLOBAL" fileName="logs/global.log"> <JsonTemplateLayout> <EventTemplateAdditionalField key="contextName" value="$${web:contextName}"/> </JsonTemplateLayout> </File> "File": { "name": "GLOBAL", "fileName": "logs/global.log", "JsonTemplateLayout": { "EventTemplateAdditionalField": { "key": "contextName", "value": "$${web:contextName}" } } }, File: name: "GLOBAL" fileName: "logs/global.log" JsonTemplateLayout: EventTemplateAdditionalField: key: "contextName" value: "$${web:contextName}" appender.0.type = File appender.0.name = GLOBAL appender.0.fileName = logs/global.log appender.0.layout.type = JsonTemplateLayout appender.0.layout.0.type = EventTemplateAdditionalField appender.0.layout.0.key = contextName appender.0.layout.0.value = $${web:contextName} Logging events to a separate appender for each application. In this case, you can use the routing appender to separate the events.
This kind of configuration might be used on the development server together with the human-friendly Pattern Layout : XML JSON YAML Properties <Routing name="ROUTING"> <Routes pattern="$${web:contextName:-common}"> <Route> <File name="${web:contextName:-common}" fileName="logs/${web:contextName:-common}.log"> <PatternLayout pattern="%d [%t] %-5p %c - %m%n"/> </File> </Route> </Routes> </Routing> "Routing": { "name": "ROUTING", "Routes": { "pattern": "$${web:contextName:-common}", "Route": { "File": { "name": "${web:contextName:-common}", "fileName": "logs/${web:contextName:-common}.log", "PatternLayout": { "pattern": "%d [%t] %-5p %c - %m%n" } } } } } Routing: name: "ROUTING" Routes: pattern: "$${web:contextName:-common}" File: name: "${web:contextName:-common}" fileName: "logs/${web:contextName:-common}.log" PatternLayout: pattern: "%d [%t] %-5p %c - %m%n" appender.1.type = Routing appender.1.name = ROUTING appender.1.route.type = Routes appender.1.route.pattern = $${web:contextName:-common} appender.1.route.0.type = Route appender.1.route.0.appender.type = File appender.1.route.0.appender.name = ${web:contextName:-common} appender.1.route.0.appender.fileName = logs/${web:contextName:-common}.log appender.1.route.0.appender.layout.type = PatternLayout appender.1.route.0.appender.layout.pattern = %d [%t] %-5p %c - %m%n Multiple logger contexts Since Log4j Core uses ClassLoaderContextSelector by default, no configuration is needed to achieve multiple logger contexts in your application server: the classes of each classloader will use the logger context associated with the classloader. To provide a different configuration file for each logger context, you can add files named log4j2<contextName>.xml to the classpath of your application server. See log4jContextName and log4jConfiguration for more details. Associating logger contexts to classloaders has, however, some limitations: shared libraries will not be able to use the per-application logger contexts.
To overcome this limitation, Log4j Core provides an alternative algorithm to determine the right logger context to choose: JNDI lookups. JNDI context selector Application servers set up the correct JNDI context as soon as they determine which application will handle a request. Log4j Core allows the usage of JNDI to coordinate the usage of logger contexts in a Jakarta EE application server. To use this feature, you need to: Set the log4j2.contextSelector Log4j configuration property to org.apache.logging.log4j.core.selector.JndiContextSelector , For security reasons, enable the selector by setting the log4j2.enableJndiContextSelector Log4j configuration property to true , Each web application needs to set the servlet context parameter isLog4jContextSelectorNamed to true and provide a value for both the log4jContextName servlet context parameter and the java:comp/env/log4j/context-name JNDI environment entry: Snippet from an example web.xml <context-param> <param-name>isLog4jContextSelectorNamed</param-name> <param-value>true</param-value> </context-param> <context-param> <param-name>log4jContextName</param-name> <param-value>your_application_name</param-value> </context-param> <env-entry> <env-entry-name>log4j/context-name</env-entry-name> <env-entry-value>your_application_name</env-entry-value> <env-entry-type>java.lang.String</env-entry-type> </env-entry> Replacing the application server logging subsystem Some application servers allow administrators to replace the default logging subsystem of the application server with Log4j Core. Known instructions are listed in the sections below. If your application server is not listed here, check the documentation of the application server. Tomcat Tomcat uses a modified version of Apache Commons Logging called Tomcat JULI as its internal logging system. Tomcat JULI uses java.util.logging as the default logging implementation, but since Tomcat 8.5 you can replace it with a different backend.
To use Log4j Core as the logging backend, you need to modify the system classloader of the server. Assuming $CATALINA_BASE is the main directory of your Tomcat instance, you need to: Create a $CATALINA_BASE/log4j folder to contain the Log4j dependencies, Download the following JAR files into $CATALINA_BASE/log4j : log4j-appserver : the bridge between Tomcat JULI and Log4j API, log4j-api , log4j-core . Add a Log4j Core configuration file called either log4j2.xml or log4j2-tomcat.xml to the $CATALINA_BASE/log4j folder. Modify the system classloader classpath to include all the JAR files and the $CATALINA_BASE/log4j folder itself. If you are starting Tomcat using the scripts in $CATALINA_HOME/bin , you can do it by creating a $CATALINA_BASE/bin/setenv.sh file with content: CLASSPATH="$CATALINA_BASE/log4j/*:$CATALINA_BASE/log4j/" Windows users can modify the classpath using the Procrun monitor GUI application. The application is traditionally located in $CATALINA_HOME/bin/tomcat<n>w.exe , where <n> is the major version number of Tomcat. Jetty In recent Jetty versions you just need to enable the logging-log4j2 module. See Jetty Modules documentation for more details. On Jetty 9.x or earlier you need to: Add the following JAR files to Jetty’s classpath: log4j-appserver , log4j-api , log4j-core . Set the system property org.eclipse.jetty.util.log.class to org.apache.logging.log4j.appserver.jetty.Log4j2Logger Copyright © 1999-2025 The Apache Software Foundation . Licensed under the Apache Software License, Version 2.0 . Please read our privacy policy . Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
https://llvmweekly.org/issue/614 | LLVM Weekly - #614, October 6th 2025 Welcome to the six hundred and fourteenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. News and articles from around the web and events Naveen Seth Hanig wrote on the LLVM blog about their GSoC project to support the use of simple C++20 modules from the Clang driver. Submissions are open for topics for the 2025 LLVM Runtimes Workshop, taking place at the US LLVM Developers' Meeting. According to the LLVM Calendar, in the coming week there will be the following: Office hours with the following hosts: Kristof Beyls, Johannes Doerfert. Online sync-ups on the following topics: MLIR C/C++ frontend, ClangIR upstreaming, pointer authentication, MemorySSA, LLVM qualification group, Flang, RISC-V, LLVM embedded toolchains, HLSL. For more details see the LLVM calendar, and the getting involved documentation on online sync-ups and office hours. On the forums Reid Kleckner started an RFC thread on revising LLVM's AI tool policy, following on from the recent previous discussion. The revised PR includes some text from the Fedora proposal for AI policy as well as other changes. Petr Hosek, on behalf of the LLVM infra area team, provided an update on lnt.llvm.org. The official server has been inoperable for some time, and the plan is now to move it to LLVM Foundation operation on an AWS instance. Scott Linder posted a proposal for changing the interfaces for building/editing/querying DIExpressions.
Victor Campos returned to the RFC on baremetal stdio in libc, proposing paths forward. LLVM Foundation board meeting minutes from September are now available. David Blaikie is collecting requirements for tooling to land commits via PRs without precommit review in a straightforward way. Felipe de Azevedo Piovezan proposed a new vectorised memory read packet for LLDB, allowing the debugger to request multiple memory reads from a remote at once. Kristof Beyls, Peter Smith, Marius Brehler, and Stefan Gränitz are inviting additional help in organising the FOSDEM 2026 devroom. Ferdinand Lemaire is seeking additional reviewers for the Wasm MLIR dialect upstreaming. LLVM commits !captures metadata on stores was introduced, allowing a frontend to specify that an escaping pointer will only be used for reads. 63ca848. A GNU make jobserver implementation was added. ffc503e. It's now possible to dump SelectionDAGs with sorted nodes (e.g. with -debug-only=isel-dump). c2c2e4e. An OnDiskTrieRawHashMap data structure was added for use by the content addressable storage effort. 2936a2c. llvm-readobj --offloading will list available offload bundles. 07f8f08. PowerPC elliptic curve cryptography instructions were supported. 2802ab6. LLVM's coding standards are now explicit that Unix line endings must be used for all files other than files that need CRLF endings (e.g. tests, .bat). 4e404d0. Many cl::opts were moved to the llvm namespace. 11a4b2d. After various fixes, non-trivial rematerialisation is now allowed by default, resulting in a meaningful reduction in reloads. 795a115. Clang commits Operator delete support was added to ClangIR. 38953f4. X86 i16/i32 shuffle intrinsics can now be used in constexpr. 952b123. Other project commits Tail merging of strings was added back to LLD's MachO linker. The commit message notes this optimisation can kick in for ObjC method names in many cases. d0e9890. BOLT gained new helpers to match MCInsts. d884b55.
Flang now supports the standalone OpenMP tile construct. 375f489. faccessat was implemented in LLVM's libc. 44d471e. Performance of std::find was improved by up to 2x for integral types. 97367d1. Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/manual/markers.html | Markers :: Apache Log4j, a subproject of Apache Logging Services

Markers

Markers allow you to tag log statements with a Marker object, labeling them as belonging to a specific type. For example, developers can use markers to tag log statements related to a particular subsystem or functionality. By using markers, it is possible to filter log statements based on the Marker and display only those that are of interest, such as those related to XML processing or SQL queries. Markers offer finer-grained control over log filtering than log levels or package names alone.
Creating Markers

Simple markers

To create a Marker, define a field in your class using the MarkerManager.getMarker() method:

private static final Marker SQL_MARKER = MarkerManager.getMarker("SQL");

Since a Marker is reusable across multiple log statements, storing it in a static final field makes it a constant. Once created, use it as the first argument of the log statement:

LOGGER.debug(SQL_MARKER, "SELECT * FROM {}", table);

If you use the configuration example below, you'll see the following log statement on your console:

10:42:30.982 (SQL) SELECT * FROM my_table

Parent and child markers

A marker can have zero or more parent markers, allowing for a hierarchy of markers. To create such a hierarchy, call the addParents() method on the Marker object after creating the child marker.

private static final Marker QUERY_MARKER = MarkerManager.getMarker("SQL_QUERY").addParents(SQL_MARKER);
private static final Marker UPDATE_MARKER = MarkerManager.getMarker("SQL_UPDATE").addParents(SQL_MARKER);

Child markers do not differ from simple markers; pass them as the first argument of a logging call.

LOGGER.debug(QUERY_MARKER, "SELECT * FROM {}", table);
LOGGER.debug(UPDATE_MARKER, "UPDATE {} SET {} = {}", table, column, value);

Messages marked with a child marker behave as if they were marked with both the child marker and all of its parents. If you use the configuration example below, you'll see the following log statements on your console:

10:42:30.982 (SQL_QUERY[ SQL ]) SELECT * FROM my_table
10:42:30.982 (SQL_UPDATE[ SQL ]) UPDATE my_table SET column = value

Pitfalls

It is important to note that marker names must be unique, as Log4j registers them permanently by name. Developers are advised to avoid generic marker names, as they may conflict with those provided by third parties. For technical reasons, the Marker.setParents(Marker…​) method can be called at runtime to modify the list of parents of the current marker.
However, we discourage such a practice and advise you to call the method only at initialization time. It is also worth noting that markers without parents are more efficient to evaluate than markers with multiple parents. It is generally a good idea to avoid complex marker hierarchies where possible.

Configuring filtering

Developers can use markers to filter the log statements delivered to log files. Marker processing is supported by at least the Logback and Log4j Core logging implementations. We provide a sample configuration for both of these backends.

Log4j Core

To filter messages by marker, add a MarkerFilter to your configuration file. For example, you can use the configuration below to redirect all SQL-related logs to the SQL_LOG appender, regardless of the level of the events:

Snippet from an example log4j2.xml

<Appenders>
  <Console name="SQL_LOG">
    <PatternLayout pattern="%d{HH:mm:ss.SSS} (%marker) %m%n"/>
  </Console>
</Appenders>
<MarkerFilter marker="SQL" onMatch="ACCEPT" onMismatch="NEUTRAL"/> (1)
<Loggers>
  <Root level="INFO">
    <AppenderRef ref="SQL_LOG">
      <MarkerFilter marker="SQL"/> (2)
    </AppenderRef>
  </Root>
</Loggers>

Snippet from an example log4j2.json

"Appenders": {
  "Console": {
    "name": "SQL_LOG",
    "PatternLayout": {
      "pattern": "%d{HH:mm:ss.SSS} (%marker) %m%n"
    }
  }
},
"MarkerFilter": { (1)
  "marker": "SQL",
  "onMatch": "ACCEPT",
  "onMismatch": "NEUTRAL"
},
"Loggers": {
  "Root": {
    "level": "INFO",
    "AppenderRef": {
      "ref": "SQL_LOG",
      "MarkerFilter": { (2)
        "marker": "SQL"
      }
    }
  }
}

Snippet from an example log4j2.yaml

Appenders:
  Console:
    name: "SQL_LOG"
    PatternLayout:
      pattern: "%d{HH:mm:ss.SSS} (%marker) %m%n"
MarkerFilter: (1)
  marker: "SQL"
  onMatch: "ACCEPT"
  onMismatch: "NEUTRAL"
Loggers:
  Root:
    level: "INFO"
    AppenderRef:
      ref: "SQL_LOG"
      MarkerFilter: (2)
        marker: "SQL"

Snippet from an example log4j2.properties

appender.0.type = Console
appender.0.name = SQL_LOG
appender.0.layout.type = PatternLayout
appender.0.layout.pattern = %d{HH:mm:ss.SSS} (%marker) %m%n
(1)
filter.0.type = MarkerFilter
filter.0.marker = SQL
filter.0.onMatch = ACCEPT
filter.0.onMismatch = NEUTRAL
rootLogger.level = INFO
rootLogger.appenderRef.0.ref = SQL_LOG
(2)
rootLogger.appenderRef.0.filter.type = MarkerFilter
rootLogger.appenderRef.0.filter.marker = SQL

1 Accepts all events marked with SQL regardless of their level.
2 Only allows events marked with SQL or one of its children to be sent to the SQL_LOG appender.

Logback

Logback differentiates two kinds of filters: TurboFilters, which are applied before a log event is created, and Filters, which are applied only when a log event reaches an appender. See Logback filters for more information. You can use a combination of MarkerFilter, EvaluatorFilter, and OnMarkerEvaluator to redirect all messages marked with SQL to a specific appender, regardless of their level. To do that, you can use a configuration like the one below:

Snippet from an example logback.xml

<turboFilter class="ch.qos.logback.classic.turbo.MarkerFilter"> (1)
  <Marker>SQL</Marker>
  <OnMatch>ACCEPT</OnMatch>
</turboFilter>
<appender name="SQL_LOG" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} (%marker) %m%n</pattern>
  </encoder>
  <filter class="ch.qos.logback.core.filter.EvaluatorFilter"> (2)
    <evaluator class="ch.qos.logback.classic.boolex.OnMarkerEvaluator">
      <marker>SQL</marker>
    </evaluator>
    <onMismatch>DENY</onMismatch>
  </filter>
</appender>
<root level="INFO">
  <appender-ref ref="SQL_LOG"/>
</root>

1 Accepts all events marked with SQL regardless of their level.
2 Only allows events marked with SQL or one of its children to be sent to the SQL_LOG appender.

Complete example

To try the examples on this page: add MarkerExample.java to the src/main/java/example folder of your project; if your project uses Log4j Core, add log4j2.xml to the src/main/resources folder of your project; if your project uses Logback, add logback.xml to the src/main/resources folder of your project.
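The parent-matching rule above — an event marked with a child marker also matches filters configured for any of the child's ancestors — can be sketched in plain Java. This mirrors the semantics of Log4j's real Marker.isInstanceOf(String) method, but the class below is a stdlib-only illustrative model, not Log4j code:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Stdlib-only illustration of marker parent matching. It mirrors the
// semantics of Log4j's Marker.isInstanceOf(String), but this class is an
// invented model, not the Log4j implementation.
class MarkerSketch {
    final String name;
    final Set<MarkerSketch> parents = new LinkedHashSet<>();

    MarkerSketch(String name) { this.name = name; }

    MarkerSketch addParents(MarkerSketch... newParents) {
        for (MarkerSketch p : newParents) parents.add(p);
        return this;
    }

    // A marker "is" itself or any marker reachable through its parents.
    boolean isInstanceOf(String other) {
        if (name.equals(other)) return true;
        for (MarkerSketch p : parents) {
            if (p.isInstanceOf(other)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        MarkerSketch sql = new MarkerSketch("SQL");
        MarkerSketch query = new MarkerSketch("SQL_QUERY").addParents(sql);
        // A filter on "SQL" accepts events marked with SQL_QUERY...
        System.out.println(query.isInstanceOf("SQL"));     // true
        // ...but a filter on "SQL_QUERY" does not accept plain SQL events.
        System.out.println(sql.isInstanceOf("SQL_QUERY")); // false
    }
}
```

This is why the MarkerFilter configured for SQL above also accepts events marked with SQL_QUERY or SQL_UPDATE: the match walks the parent hierarchy, not just the marker's own name.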
| 2026-01-13T09:30:34
https://llvmweekly.org/issue/622 | LLVM Weekly - #622, December 1st 2025 Welcome to the six hundred and twenty-second issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. News and articles from around the web and events Amr Hesham wrote on the LLVM blog about their ClangIR upstreaming GSoC project. Matt Godbolt is doing a series of blog posts, "Advent of Compiler Optimisations". The first one has now been posted. The deadline for submissions to the FOSDEM 2026 LLVM dev room has been extended to December 7th. According to the LLVM Calendar, in the coming week there will be the following: Office hours with the following hosts: Quentin Colombet, Johannes Doerfert, Renato Golin. Online sync-ups on the following topics: MLIR C/C++ frontend, ClangIR upstreaming, pointer authentication, MemorySSA, overflow behavior types, qualification group, libc++, OpenMP, Clang C/C++ language working group, Flang, RISC-V, embedded toolchains, MLGO, reflection. For more details see the LLVM calendar, and the getting involved documentation on online sync-ups and office hours. On the forums Last call for volunteers for the EuroLLVM developers' meeting program committee (apply by end of today, December 1st). "gbMattN" started RFCs on the behaviour of TypeSanitizer when Clang's TBAA is incorrect and on adding TBAA metadata to returned aggregates. "valadaptive" posted an RFC aiming to resolve issues around the semantics of various floating point minimum and maximum operations.
Nikita Popov is re-checking if anyone has views on deprecating C API functions using the global context. Charles Zablit would like to add a check-python subcommand to lldb and lldb-dap to allow tools like the VSCode lldb-dap extension to check that Python is correctly configured. Luke Lau proposes allowing non-constant offsets in the llvm.vector.splice intrinsic. Endre Fülöp started an RFC discussion on offering finer-grained control over when the security.insecureAPI.DeprecatedOrUnsafeBufferHandling checker reports warnings for deprecated buffer handling functions. More round table notes were posted: "safe mode", C++ safety adoption tooling, and C/C++ bounds safety. Fangrui Song shared some results from porting the Mach-O compact unwind format to ELF. LLVM commits Rematerialization for scalar loads was enabled in the RISC-V backend. 4b35ff5. BPF now has a new allow-builtin-calls target feature (you'll never guess what it does!). 23907a2. Guidance for reviewing commit access requests was documented. d7dcc10. A vector scheduling model was added for the Tenstorrent Ascalon D8 RISC-V processor. ad3d9fb. Dwarf fission (-gsplit-dwarf) can now be used for RISC-V. 5f777b2, cc1c417. Support was added for "deactivation symbols". Used by pointer field protection, these allow object files to disable specific instructions in other object files at link time. 6227eb9. Clang commits Documentation on how Clang-generated HIP fat binaries are registered and unregistered with the HIP runtime was improved. 1b8626b. Amongst many other ClangIR patches this week, support was added for __builtin_operator_{new,delete}. e6f60a6. Other project commits The BOLT inliner was extended to work on functions with pointer authentication. bab1c29. Lowering was implemented for the Flang PAUSE statement. 70970d0. LLVM's libc gained clock_gettime for Darwin. 80e4a3f. picolibc and newlib support was added to the RUNTIMES_USE_LIBC build option in libcxx. a6643f2.
A ControllerAccess interface was added to orc-rt, providing bidirectional RPC between the executor (containing JITted code) and the controller (containing the llvm::orc::ExecutionSession ). bfc732e . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/624 | LLVM Weekly - #624, December 15th 2025 Welcome to the six hundred and twenty-fourth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. News and articles from around the web and events My Igalia colleague Luke Lau wrote the first part of a blog series on closing the performance gap between Clang/LLVM RISC-V and GCC. This part covers methodology, LNT, and how fixing a failure to select fmsub gives a ~1.8% improvement in instruction count. Miguel Cárdenas wrote on the LLVM project blog about their GSoC project to provide new visualisation tools (see a demo here). The next Portland area LLVM social will take place on December 18th. Note the new location. According to the LLVM Calendar, in the coming week there will be the following: Office hours with the following hosts: Phoebe Wang, Johannes Doerfert. Online sync-ups on the following topics: ClangIR upstreaming, pointer authentication, vectoriser improvements, security response group public sync-up, OpenMP in LLVM, Clang C/C++ language working group, Flang, RISC-V, LLVM libc, HLSL. For more details see the LLVM calendar, and the getting involved documentation on online sync-ups and office hours. On the forums Hana Dusíková is collecting input on experience with [[clang::musttail]]. There is some discussion in the responses about the C committee proposal for return goto. There's been an incredible amount of back and forth discussion on the floating point minimum and maximum operations RFC.
Nikita Popov posted a summary of facts and proposed next steps. Andrzej Warzynski started an RFC discussion on whether the MLIR vector dialect should introduce vector.compressstore and vector.expandload ops. Anutosh Bhat is seeking feedback on a proposal to implement a flang interpreter that could be the basis of a Jupyter Fortran kernel. Andrzej Warzynski wrote up some 2025 highlights for the MLIR vector dialect. Matt Bentley shared some notes on an alternative implementation approach for the proposed std::hive data structure. Meeting notes from the last LLVM project council meeting are now available. There has been some further discussion on the SFrame RFC, where Nikita Popov has now written up some thoughts. Clément Fournier shared the zsh plugin they've created to help with using mlir-opt in the terminal. Rahul Joshi started a discussion about using variadic isa<> in LLVM code, noting that it is used widely in the MLIR codebase but there was some pushback elsewhere within LLVM. LLVM commits Modifying BlockFrequencyInfo to use getPredBlockCostDivisor resulted in a 7% speedup for the 531.deepsjeng_r benchmark on RISC-V. e8219e5. ConstantInt::get() now has an ImplicitTrunc parameter (true by default for now). It will be switched to false in the future to help guard against cases where unintended truncation may be taking place. 7b65219. An llvm::reverse_conditionally() iterator was added, which allows conditionally iterating a collection in reverse. 9a5fa30, 085dc63. The LoadStoreVectorizer now implements a gap filling optimisation, aiming to fill in holes in otherwise contiguous load/store chains to enable vectorisation. It can also extend the end of a chain to the closest power of two. 5c8c7f3. llvm.experimental.vp.splat was removed. 86c5539. An Apple M5/A19 CPU definition was added. f85494f. In LLVM IR, switch case values are no longer stored as operands and are instead stored more efficiently as a simple array of pointers after the uses. 6813f8f.
llvm-lit now has an option to re-run only the tests that failed during the previous run. 04ce013. Clang commits Clang was updated to use the data layouts from LLVM's TargetParser as opposed to maintaining its own copies of them. 9dc3255. -fdevirtualize-speculatively can be used to opt in to the speculative devirtualisation optimisation. 9e7ce77. Basic support for data member pointers was added to ClangIR. 87bf5ee. Support for defer in C was added (needs -fdefer-ts). This implements the draft specification. 71bfdd1. clang-doc HTML generation now uses the Mustache backend. 24117f7. Other project commits A baremetal version of the compiler-rt profile library was added. bc0d0bb. A policy was added for when LLVM libc ports can be 'sunsetted' (removed). 2797688. A WebAssembly 'platform' was added to LLDB. The commit message gives an example of using it to launch binaries under the WebAssembly Micro Runtime. 8d59cca. MLIR pass instrumentation can now signal failures such as failed invariant checks. 6b7b0ab. Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://llvmweekly.org/issue/593 | LLVM Weekly - #593, May 12th 2025 Welcome to the five hundred and ninety-third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. News and articles from around the web and events Recordings from EuroLLVM 2025 have started to be posted to YouTube. If you might like to run a workshop the day before the 2025 LLVM Developers' Meeting, now is the time to submit the proposal (deadline June 1st). My Igalia colleague Mikhail Gadelha has a blog post on work to improve RISC-V LLVM performance as compared to GCC, a project done through RISE and also summarised on their blog. Mikhail's SpacemiT X60 scheduling model patch was also mentioned on Phoronix. I talk about tech in general, LLVM Weekly, and projects I've been involved in such as RISC-V, LLVM, lowRISC, Raspberry Pi, and work at Igalia on the latest episode of the TMPDIR podcast hosted by Khem Raj and Cliff Brake. Not so on-topic for LLVM, but if you're interested in my computer setup or enjoy scrolling through lots of Linux configuration notes, you might like my blog post on the MiniBook X N150 - the netbook isn't dead (yet!). I also wrote up suite-helper, the helper script I split out to handle my most common tasks when building and diving into llvm-test-suite configurations - I'd highlight particularly the reduce-ll helper to llvm-reduce a target assembly snippet for a given .c input. Finally, I'm presenting a talk about improvements to RISC-V vector code generation in LLVM at the RISC-V Summit Europe in Paris this week.
If you’re here, be sure to say hi! The next LLVM Bay Area Monthly Meetup will take place on Monday 12th May . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Alexey Bader, Alina Sbirlea, Kristof Beyls, Johannes Doerfert, Aaron Ballman. Online sync-ups on the following topics: Flang, modules, libc++, LLVM/offload, BOLT, SPIR-V, OpenMP for Flang, memory safety working group. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Reid Kleckner found that each class template instantiation in Clang costs about 1KiB of memory . Utkarsh Saxena and collaborators propose a Polonius-inspired intra-procedural lifetime analysis in Clang intended to detect issues such as use-after-free and use-after-return. Britton Watson is seeking volunteers for the 2025 LLVM Developers' Meeting Program Committee and for the Student Travel Grant Review Committee. “wdx727” proposed adding stale profile handling to Propeller, inspired by BOLT . Alex Zinenko is trying to build a maintainer list for MLIR . In response to a question, Peter Waller gave guidance on how to use CSPGO . Ramkumar Ramachandra started an RFC discussion on a new analysis to recognise polynomial hash functions like CRC . Md Abdullah Shahneous Bari made an MLIR RFC on adding a generic way to imitate/emulate unsupported data types in a target environment . VenkataKeerthy started a discussion on upstreaming support for generating IR2Vec embeddings into Machine Learning Guided Optimisation (MLGO) for inlining . Csanád Hajdú wrote up a summary of current support for execute-only libraries on AArch64, its limitations, and a proposed path forward that adds a new frontend option. The next MLIR open meeting will be an open design meeting on symbolic expressions and symbolic tensors in MLIR . 
LLVM commits reportFatalInternalError and reportFatalUsageError were introduced and report_fatal_error was deprecated. As discussed at length in a previous RFC , report_fatal_error generates a backtrace and an invitation to submit a bug report unless GenCrashDiag=false is passed. You would now use reportFatalInternalError for situations indicating a bug in LLVM (crash dialog generated) and reportFatalUsageError for exiting with an error but no crash dialog. b492ec5 , 562a455 . An initial scheduler model was added for the SpacemiT-X60 RISC-V CPU. 4eac576 . A new flag was added to disable the SchedModel/Itineraries during scheduling. 00e7a02 . It is now documented that an attempt to evade a non-permanent ban frmo the project will result in being banned permanently. f2f4eac . The uselist was removed for ConstantData, meaning it is no longer possible to inspect the uses of ConstantData. 9383fb2 , 4d60c6d . Assembly printer passes are now registered in the pass registry meaning you can use llc -start-before=<target>-asm-printer in tests. 675cb70 . As a debug tool, new statistics were added to track the number of instructions that remain in source order after scheduling and the total number of instructions scheduled. A range of other statistics were added too. ddfdecb , cdde6a6 . Initial code generation support was added for the RISC-V Zvqdotq (dot product) extension. 1ac489c . A new TargetInstrInfo hook was added and used in MachineCopyPropagation to simplify/canonicalise instructions after copy propagation (there’s a pattern where tail duplication followed copy propagation results in operands that are the zero register, which depending on the opcode might mean it can be simplified to a canonical move or load-immediate). 52b345d . The last mentions of IRC were removed from LLVM’s documentation, as it’s not used by the community any more. 7548cec . 
RISC-V SDNodes are now tablegenerated (similar cleanups were made recently to other backends, but I think this is the most in-depth). c60db55. $HOME is now passed through to tests run via lit. 635c648. Documentation started to land on MLGO (Machine Learning Guided Optimisation). 77d1db6d. Clang commits -Wjump-bypasses-init was renamed to -Wjump-misses-init. 43c05d9. The core language parts of the C++ trivial relocation proposal were implemented, and the previously implemented __is_trivially_relocatable was deprecated in favour of __builtin_is_cpp_trivially_relocatable. 300d402, 09c80e2. A cir-simplify pass was added. 2eb6545. __ptrauth can now be applied to integer types. 65a6cbd. Other project commits crt1 for the UEFI platform was added to LLVM's libc. 865fb9c. libcxx added the __is_replaceable type trait. 45d493b. MLIR's LLVM dialect gained ProfileSummary module flag support. 28934fe. An --affine-raise-from-memref pass was added to MLIR. 7aabf47. Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://docs.aws.amazon.com/it_it/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-testing-debugging.html | Test e debug delle funzioni Lambda@Edge - Amazon CloudFront Test e debug delle funzioni Lambda@Edge - Amazon CloudFront Documentazione Amazon CloudFront Guida per gli sviluppatori Test delle funzioni Lambda@Edge Identifica gli errori della funzione Lambda @Edge in CloudFront Risoluzione dei problemi relativi alle risposte non valide della funzione Lambda@Edge (errori di convalida) Risoluzione dei problemi relativi agli errori di esecuzione della funzione Lambda@Edge Determinazione della regione Lambda@Edge Determina se il tuo account invia i log a CloudWatch Le traduzioni sono generate tramite traduzione automatica. In caso di conflitto tra il contenuto di una traduzione e la versione originale in Inglese, quest'ultima prevarrà. Test e debug delle funzioni Lambda@Edge È importante testare il codice della funzione Lambda @Edge in modo indipendente, per assicurarsi che completi l'attività prevista, ed eseguire test di integrazione, per assicurarsi che la funzione funzioni correttamente. CloudFront Durante i test di integrazione o dopo la distribuzione della funzione, potrebbe essere necessario eseguire il debug di CloudFront errori, come gli errori HTTP 5xx. Gli errori possono essere una risposta non valida restituita dalla funzione Lambda, gli errori di esecuzione quando la funzione è attivata, oppure gli errori dovuti al throttling di esecuzione da parte del servizio Lambda. Le sezioni in questo argomento condividono le strategie per determinare quale tipo di errore è il problema, e quindi quale procedura adottare per risolvere il problema. Nota Quando esamini i file di CloudWatch registro o le metriche durante la risoluzione degli errori, tieni presente che vengono visualizzati o archiviati nella posizione Regione AWS più vicina alla posizione in cui è stata eseguita la funzione. 
Quindi, se hai un sito Web o un'applicazione Web con utenti nel Regno Unito e hai una funzione Lambda associata alla tua distribuzione, ad esempio, devi modificare la regione per visualizzare le CloudWatch metriche o i file di registro per Londra. Regione AWS Per ulteriori informazioni, consulta Determinazione della regione Lambda@Edge . Argomenti Test delle funzioni Lambda@Edge Identifica gli errori della funzione Lambda @Edge in CloudFront Risoluzione dei problemi relativi alle risposte non valide della funzione Lambda@Edge (errori di convalida) Risoluzione dei problemi relativi agli errori di esecuzione della funzione Lambda@Edge Determinazione della regione Lambda@Edge Determina se il tuo account invia i log a CloudWatch Test delle funzioni Lambda@Edge Sono disponibili due fasi per il test della funzione Lambda: test autonomo e test di integrazione. Test di funzionalità autonoma Prima di aggiungere la funzione Lambda CloudFront, assicurati di testarla prima utilizzando le funzionalità di test nella console Lambda o utilizzando altri metodi. Per ulteriori informazioni sul test nella console Lambda, consulta Invocare una funzione Lambda utilizzando la console nella Guida per gli sviluppatori di AWS Lambda . Verifica il funzionamento della tua funzione in CloudFront È importante completare i test di integrazione, in cui la funzione è associata a una distribuzione ed è eseguita in base a un CloudFront evento. Verifica che la funzione sia attivata per l'evento corretto e restituisca una risposta valida e corretta per CloudFront. Ad esempio, verifica che la struttura dell’evento sia corretta, che siano incluse solo le intestazioni valide e così via. Mentre esegui il test di integrazione con la tua funzione nella console Lambda, fai riferimento ai passaggi del tutorial Lambda @Edge mentre modifichi il codice o cambi il trigger che chiama CloudFront la tua funzione. 
For example, make sure that you're using a numbered version of your function, as described in this step of the tutorial: Step 4: Add a CloudFront trigger to run the function. As you make changes and deploy them, be aware that it will take several minutes for the updated function and CloudFront triggers to replicate across all Regions. This typically takes a few minutes but, in some cases, can take up to 15. You can check whether replication is finished by going to the CloudFront console and viewing your distribution.
To check whether replication has finished: Open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home . Choose the name of your distribution. Check that the distribution status has changed from In Progress back to Deployed, which means that your function has been replicated. Then follow the steps in the next section to verify that the function works.
Be aware that testing in the console only validates your function's logic, and does not apply the service quotas (formerly known as limits) that are specific to Lambda@Edge.
Identifying Lambda@Edge function errors in CloudFront
After you've verified that your function logic works correctly, you might still see HTTP 5xx errors when your function runs in CloudFront. HTTP 5xx errors can be returned for a variety of reasons, including Lambda function errors or other issues in CloudFront. If you use Lambda@Edge functions, you can use graphs in the CloudFront console to help track down what's causing the error, and then work to fix it. For example, you can see whether HTTP 5xx errors are caused by CloudFront or by Lambda functions, and then, for specific functions, you can view the related log files to investigate the issue.
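The replication check described above (wait for the distribution status to return from In Progress to Deployed, allowing up to about 15 minutes) can be sketched as a simple polling loop. This is an illustrative Python sketch, not an AWS tool: `get_status` is a placeholder for whatever returns the distribution's current status string, and the timeout mirrors the upper bound mentioned in the text.

```python
import time

def wait_until_deployed(get_status, poll_seconds=30, timeout_seconds=900):
    """Poll until the distribution status returns "Deployed".

    get_status: zero-argument callable returning the current status string
    (for example, "InProgress" or "Deployed", the states shown in the
    CloudFront console). Raises TimeoutError after timeout_seconds.
    """
    waited = 0
    while waited <= timeout_seconds:
        if get_status() == "Deployed":
            return True
        time.sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError("distribution did not reach Deployed within timeout")
```

In practice `get_status` would wrap a CloudFront API call or a console check; here it is left abstract so the polling logic stands on its own.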
To troubleshoot HTTP errors in CloudFront in general, see the troubleshooting steps in the following topic: Troubleshooting error response status codes in CloudFront.
What causes Lambda@Edge function errors in CloudFront
There are several reasons why a Lambda function might cause an HTTP 5xx error, and the troubleshooting steps to take depend on the type of error. Errors can be categorized as follows:
A Lambda function execution error. An execution error occurs when CloudFront doesn't get a response from Lambda because there are unhandled exceptions in the function or there's an error in the code, for example, if the code includes callback(Error).
An invalid Lambda function response is returned to CloudFront. After the function runs, CloudFront receives a response from Lambda. An error occurs if the structure of the response object doesn't conform to the Lambda@Edge event structure, or if the response contains invalid headers or other invalid fields.
The execution in CloudFront is throttled because of Lambda service quotas (formerly known as limits). The Lambda service throttles executions in each Region, and returns an error if you exceed the quota. For more information, see Quotas on Lambda@Edge.
How to determine the type of failure
To help you decide where to focus as you debug and work to resolve errors that CloudFront is returning, it's helpful to identify why CloudFront is returning an HTTP error. To start, you can use the graphs provided in the Monitoring section of the CloudFront console on the AWS Management Console. For more information about viewing graphs in the Monitoring section of the CloudFront console, see
Monitoring CloudFront metrics with Amazon CloudWatch. The following graphs can be especially helpful when you want to determine whether errors are being returned by origins or by a Lambda function, and to narrow down the type of issue when it's a Lambda function error.
Error rates graph
One of the graphs that you can view on the Overview tab for each of your distributions is an Error rates graph. This graph shows error rates as a percentage of the total requests coming to your distribution. The graph shows the total error rate, total 4xx errors, total 5xx errors, and total 5xx errors from Lambda functions. Based on the error type and volume, you can take steps to track down and fix the cause. If you see Lambda errors, you can investigate further by looking at the specific types of errors that the function returns. The Lambda@Edge errors tab includes graphs that categorize function errors by type, to help you pinpoint the issue for a specific function. If you see CloudFront errors, you can troubleshoot and work to fix origin errors or change your CloudFront configuration. For more information, see Troubleshooting error response status codes in CloudFront.
Execution errors and invalid function responses graphs
The Lambda@Edge errors tab includes graphs that categorize the Lambda@Edge errors for a specific distribution, by type. For example, one graph shows all execution errors by AWS Region. To make troubleshooting easier, you can look for specific issues by opening and examining the log files for specific functions, by Region.
To view the log files for a specific function, by Region: On the Lambda@Edge errors tab, under Associated Lambda@Edge functions, choose the name of the function, and then choose View metrics. On the page with the function's name, in the top-right corner, choose View function logs, and then choose a Region. For example, if you see issues in the Errors graph for the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console. In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events for the function. In addition, read the following sections in this chapter for more recommendations on troubleshooting and fixing errors.
Throttles graph
The Lambda@Edge errors tab also includes a Throttles graph. Sometimes the Lambda service throttles your function invocations by Region, if you reach the Regional concurrency quota (formerly known as a limit). If you see a limit exceeded error, your function has hit a quota that the Lambda service enforces on executions in a Region. For more information, including how to request a quota increase, see Quotas on Lambda@Edge. For an example of how to use this information when troubleshooting HTTP errors, see Four steps for debugging your content delivery on AWS.
Troubleshooting invalid Lambda@Edge function responses (validation errors)
If you've identified the problem as a Lambda validation error, it means that your Lambda function is returning an invalid response to CloudFront. Follow the guidance in this section to take steps to review your function and make sure that your response conforms to CloudFront's requirements.
CloudFront validates the response from a Lambda function in two ways:
The Lambda response must conform to the required object structure. Examples of bad object structure include the following: unparseable JSON, missing required fields, and an invalid object in the response. For more information, see Lambda@Edge event structure.
The response must include only valid object values. An error occurs if the response includes a valid object but has values that aren't supported. Examples include the following: adding or updating disallowed (blacklisted) or read-only headers (see Restrictions on edge functions), exceeding the maximum body size (see Restrictions on the size of the generated response in the Lambda@Edge errors topic), and invalid characters or values (see Lambda@Edge event structure).
When Lambda returns an invalid response to CloudFront, error messages are written to log files that CloudFront pushes to CloudWatch in the Region where the Lambda function ran. Sending log files to CloudWatch when there's an invalid response is the default behavior. However, if you associated a Lambda function with CloudFront before the feature was released, logging might not be enabled for your function. For more information, see Determining whether your account sends logs to CloudWatch, later in this topic. CloudFront pushes the log files to the Region that corresponds to where your function ran, in the log group that's associated with your distribution. Log groups have the following naming format: /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where you can find the CloudWatch log files, see Determining the Lambda@Edge Region, later in this topic.
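The two validation rules above can be approximated as a small pre-flight check run against a response object before deploying a function. This Python sketch is purely illustrative, not CloudFront's actual validator: the read-only header names and the 1 MB body cap are assumptions chosen for the example, and the real service checks far more.

```python
# Illustrative pre-flight check for a Lambda@Edge-style response object.
# READ_ONLY_HEADERS and MAX_BODY_BYTES are assumed values for this
# example, not CloudFront's authoritative limits.
READ_ONLY_HEADERS = {"content-length", "transfer-encoding", "via"}
MAX_BODY_BYTES = 1024 * 1024

def find_response_problems(response: dict) -> list[str]:
    """Return a list of human-readable problems; empty if none found."""
    problems = []
    # Rule 1 (structure): a generated response needs a status field.
    if "status" not in response:
        problems.append("missing required field: status")
    # Rule 2 (values): reject headers the function is not allowed to set.
    for name in response.get("headers", {}):
        if name.lower() in READ_ONLY_HEADERS:
            problems.append(f"read-only header set by function: {name}")
    # Rule 2 (values): reject bodies over the assumed size cap.
    body = response.get("body", "")
    if len(body.encode("utf-8")) > MAX_BODY_BYTES:
        problems.append("generated body exceeds maximum size")
    return problems
```

Running a check like this in unit tests catches structural mistakes before they surface as 502 errors from CloudFront.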
If the error is reproducible, you can create a new request that results in the error, and then find the request ID in a failed CloudFront response (the X-Amz-Cf-Id header) to locate a single failure in the log files. The log file entry includes information that helps you identify why the error is being returned, and it also lists the corresponding Lambda request ID so that you can analyze the root cause in the context of a single request. If an error is intermittent, you can use CloudFront access logs to find the request ID of a failed request, and then search the CloudWatch logs for the corresponding error messages. For more information, see the previous section, How to determine the type of failure.
Troubleshooting Lambda@Edge function execution errors
If the problem is a Lambda execution error, it can be helpful to add logging statements to your Lambda functions, to write messages to the CloudWatch log files that monitor the execution of your function in CloudFront and determine whether it's working as expected. Then you can search for those statements in the CloudWatch log files to verify that your function is working.
Note: Even if you haven't changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could return an execution error. For information about testing and migrating to a later version, see Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environment.
Determining the Lambda@Edge Region
To see the Regions where your Lambda@Edge function is receiving traffic, view the metrics for the function on the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region.
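The correlation step described above (take the X-Amz-Cf-Id value from a failed response, then search the logs in the right Region for it) can be sketched as a small helper. The X-Amz-Cf-Id header name comes from the text; the log-line format in the test is invented for illustration.

```python
# Sketch of correlating a failed CloudFront response with log entries.
# Pure-Python illustration: real log lines come from CloudWatch.
def lines_for_request(headers: dict, log_lines: list[str]) -> list[str]:
    """Return the log lines that mention the failed request's ID."""
    # HTTP header names are case-insensitive; normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    request_id = normalized.get("x-amz-cf-id")
    if request_id is None:
        return []
    return [line for line in log_lines if request_id in line]
```

In a real investigation the `log_lines` input would be fetched from the CloudWatch log group in the Region where the function ran, as described above.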
On the same page, you can choose a Region and view the log files for that Region, so that you can investigate issues. You must review the CloudWatch log files in the correct AWS Region to see the log files that were created when CloudFront ran your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitoring CloudFront metrics with Amazon CloudWatch.
Determining whether your account sends logs to CloudWatch
By default, CloudFront enables logging of invalid Lambda function responses, and pushes the log files to CloudWatch by using one of the Service-linked roles for Lambda@Edge. If you have Lambda@Edge functions that you added to CloudFront before the invalid Lambda function response logging feature was released, logging is enabled the next time that you update your Lambda@Edge configuration, for example, by adding a CloudFront trigger. You can verify that pushing log files to CloudWatch is enabled for your account by doing the following:
Check whether the logs appear in CloudWatch: Make sure that you look in the Region where the Lambda@Edge function ran. For more information, see Determining the Lambda@Edge Region.
Verify whether the related service-linked role exists in your account in IAM: You must have the IAM role AWSServiceRoleForCloudFrontLogger in your account. For more information about this role, see Service-linked roles for Lambda@Edge.
| 2026-01-13T09:30:34 |
https://www.mongodb.com/community/forums/t/mongodb-weeklyupdate-66-april-22-2022-hackathons-mongosh-and-github/159504 | MongoDB $weeklyUpdate #66 (April 22, 2022): Hackathons, mongosh, and Github - $weeklyUpdate - MongoDB Community Hub About the Community $weeklyUpdate mongodb-shell , weekly-update , swift , storage Megan_Grant (Megan Grant) April 22, 2022, 3:30pm $weeklyUpdate Hi everyone! Welcome to MongoDB $weeklyUpdate! Here, you’ll find the latest developer tutorials, upcoming official MongoDB events, and get a heads up on our latest Twitch streams and podcast, curated by Megan Grant . Enjoy! Freshest Tutorials on DevHub Want to find the latest MongoDB tutorials and articles created for developers, by developers? Look no further than our DevHub ! Generating MQL Shell Commands Using OpenAI and New mongosh Shell @Pavel_Duchovny In this article, I will show you my experiment, including the game-changing capabilities of the new MongoDB Shell (mongosh) which can extend scripting with npm modules integrations. Continuously Building and Hosting our Swift DocC Documentation using Github Actions and Netlify @Diego_Freniche In this post, we’ll see how to use Github Actions to continuously generate the DocC documentation for our Swift libraries and how to publish this documentation. Christmas Lights and Webcams with the MongoDB Data API @John_Page I built a Christmas tree with an API so you, dear reader, can control the lights as well as a webcam to view it. Create a Custom Data-Enabled API in MongoDB Realm in 10 Minutes or Less @Michael_Lynn In this article, I’ll explain the steps to follow to quickly create an API that exposes data from a sample database in MongoDB Atlas. Official MongoDB Events & Community Events Attend an official MongoDB event near you!
Chat with MongoDB experts, learn something new, meet other developers, and win some swag! April 11 - May 20 (Virtual) - MongoDB Hackathon Apr 25-26 (London) - MongoDB @ Kafka Summit London Apr 27-28 (London) - MongoDB at AWS Summit London Apr 27 (6pm GMT+1 | Dublin) - Dublin MUG: Let’s talk Atlas, Realm and MongoDB Queries May 4 (New York) - New York MUG: Fast Track into MongoDB World ‘22 Hackathon MongoDB on Twitch & YouTube We stream tech tutorials, live coding, and talk to members of our community via Twitch and YouTube . Sometimes, we even stream twice a week! Be sure to follow us on Twitch and subscribe to our YouTube channel to be notified of every stream! Latest Stream More Video Goodness Follow us on Twitch and subscribe to our YouTube channel so you never miss a stream! Last Word on the MongoDB Podcast Latest Episode Spotify Ep. 108 Exploring Postman with Arlemi Turpault Listen to this episode from The MongoDB Podcast on Spotify. Postman is an application used for API testing. It is an HTTP client that tests HTTP requests, utilizing a graphical user interface, through which we obtain different types of responses that... Catch up on past episodes : Ep. 107 - Introduction to WiredTiger with Dr. Michael Cahill Ep. 106 - Securing the Internet with Josh Aas, Sarah Gran of ISRG Ep. 105 - The MongoDB World 2022 Hackathon (Not listening on Spotify? We got you! We’re most likely on your favorite podcast network, including Apple Podcasts , PlayerFM , Podtail , and Listen Notes ) Did you know that you get these $weeklyUpdates before anyone else? It’s a small way of saying thank you for being a part of this community. If you know others who want to get first dibs on the latest MongoDB content and MongoDB announcements as well as interact with the MongoDB community and help others solve MongoDB related issues, be sure to share a tweet and get others to sign up today! 
| 2026-01-13T09:30:34 |
https://logging.apache.org/log4j/2.x/log4j-spring-cloud-config-client.html | Log4j Spring Cloud Configuration :: Apache Log4j, a subproject of Apache Logging Services
Log4j Spring Cloud Configuration
This module allows logging configuration files to be dynamically updated when new versions are available in Spring Cloud Configuration.
Overview
Spring Boot applications initialize logging 3 times. SpringApplication declares a Logger. This Logger will be initialized using Log4j’s "normal" mechanisms. Thus, the log4j2.configurationFile system property will be checked to see if a specific configuration file has been provided; otherwise, it will search for a configuration file on the classpath. The property may also be declared in log4j2.component.properties.
Usage
Log4j configuration files that specify a monitor interval of greater than zero will use polling to determine whether the configuration has been updated. If the monitor interval is zero, then Log4j will listen for notifications from Spring Cloud Config and will check for configuration changes each time an event is generated. If the monitor interval is less than zero, Log4j will not check for changes to the logging configuration. When referencing a configuration located in Spring Cloud Config, the configuration should be referenced with a value similar to:
log4j.configurationFile=http://host.docker.internal:8888/ConfigService/sampleapp/default/master/log4j2.xml
Log4j also supports Composite Configurations. The standard way to do that is to concatenate the paths to the files in a comma-separated string. Unfortunately, Spring validates the URL being provided, and commas are not allowed. Therefore, additional configurations must be supplied as "override" query parameters:
log4j.configurationFile=http://host.docker.internal:8888/ConfigService/sampleapp/default/master/log4j2.xml?override=http://host.docker.internal:8888/ConfigService/sampleapp/default/master/log4j2-sampleapp.xml
Note that the location within the directory structure, and how configuration files are located, is completely dependent on the searchPaths setting in the Spring Cloud Config server. When running in a Docker container, host.docker.internal may be used as the domain name to access an application running on the same host outside of the Docker container. Note that, in accordance with Spring Cloud Config practices, the application, profile, and label should be specified in the URL. The Spring Cloud Config support also allows connections using TLS and/or basic authentication. When using basic authentication, the user id and password may be specified as system properties, in log4j2.component.properties, or in Spring Boot’s bootstrap.yml. The table below shows the alternate names that may be used to specify the properties.
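The override workaround above can be sketched as a small helper that assembles the composite log4j.configurationFile value. This is plain string assembly for illustration; `composite_config_url` is not a real Log4j or Spring API, and the values are left unencoded to match the documented example.

```python
# Build a log4j.configurationFile value for a Composite Configuration:
# Spring rejects commas in the URL, so each additional configuration is
# appended as an "override" query parameter instead.
def composite_config_url(primary: str, overrides: list[str]) -> str:
    """Return the primary config URL with override query parameters."""
    if not overrides:
        return primary
    # Use "&" if the primary URL already carries a query string.
    separator = "&" if "?" in primary else "?"
    return primary + separator + "&".join(f"override={u}" for u in overrides)
```

With the two URLs from the example above, the helper reproduces the documented override form.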
Any of the alternatives may be used in any configuration location.
Property / Alias / Spring-like alias / Purpose:
- log4j2.configurationUserName / log4j2.config.username / logging.auth.username / User name for basic authentication
- log4j2.configurationPassword / log4j2.config.password / logging.auth.password / Password for basic authentication
- log4j2.configurationAuthorizationEncoding / (no alias) / logging.auth.encoding / Encoding for basic authentication (defaults to UTF-8)
- log4j2.configurationAuthorizationProvider / log4j2.config.authorizationProvider / logging.auth.authorizationProvider / Class used to create the HTTP Authorization header
log4j2.configurationUserName=guest
log4j2.configurationPassword=guest
As noted above, Log4j supports accessing the logging configuration from bootstrap.yml. As an example, to configure reading from a Spring Cloud Configuration service using basic authorization, you can do:
spring:
  application:
    name: myApp
  cloud:
    config:
      uri: https://spring-configuration-server.mycorp.com
      username: appuser
      password: changeme
logging:
  config: classpath:log4j2.xml
  label: ${spring.cloud.config.label}
---
spring:
  profiles: dev
logging:
  config: https://spring-configuration-server.mycorp.com/myApp/default/${logging.label}/log4j2-dev.xml
  auth:
    username: appuser
    password: changeme
Note that Log4j currently does not directly support encrypting the password. However, Log4j does use Spring’s standard APIs to access properties in the Spring configuration, so any customizations made to Spring’s property handling would apply to the properties Log4j uses as well. If more extensive authentication is required, an AuthorizationProvider can be implemented and its fully qualified class name should be specified in the log4j2.configurationAuthorizationProvider system property, in log4j2.component.properties, or in Spring’s bootstrap.yml using either the log4j2.authorizationProvider or logging.auth.authorizationProvider key. For the properties required by TLS configuration, see the Transport Security configuration.
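For reference, the basic-authentication settings above ultimately produce a standard HTTP Authorization header. The following Python sketch shows the RFC 7617 construction that an AuthorizationProvider-style class would return; it is an illustration of the header format, not Log4j's implementation.

```python
import base64

def basic_authorization(username: str, password: str, encoding: str = "UTF-8") -> str:
    """Build an HTTP Basic Authorization header value (RFC 7617).

    The credentials are joined with a colon, encoded with the configured
    encoding (UTF-8 by default, matching the table above), and base64'd.
    """
    token = f"{username}:{password}".encode(encoding)
    return "Basic " + base64.b64encode(token).decode("ascii")
```

With the guest/guest example values above, this yields "Basic Z3Vlc3Q6Z3Vlc3Q=".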
Requirements
The Log4j 2 Spring Cloud Configuration integration has a dependency on Log4j 2 API, Log4j 2 Core, and Spring Cloud Configuration versions 2.0.3.RELEASE or 2.1.1.RELEASE, or later versions of either release series. Copyright © 1999-2025 The Apache Software Foundation . Licensed under the Apache Software License, Version 2.0 . Please read our privacy policy . Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/613 | LLVM Weekly - #613, September 29th 2025 Welcome to the six hundred and thirteenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Erick Velez wrote on the LLVM blog about their GSoC project to improve clang-doc . Krishna Pandey wrote on the LLVM blog about their GSoC project to add BFloat16 support to LLVM’s libc . Min-Yih Hsu blogged about LLVM’s machine scheduler . Proposals are invited for BOF sessions at the 2025 LLVM Developers' Meeting Embedded Toolchains Workshop . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Quentin Colombet, Johannes Doerfert, Renato Golin. Online sync-ups on the following topics: Flang, modules, libc++, lifetime safety, LLVM/Offload, Clang C/C++ language working group, SPIR-V, HLSL, memory safety, MLGO. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums LLVM 21.1.2 was released . The discussion on LLVM’s AI policy has continued at pace. James Y Knight pointed to the policy recently adopted by Fedora as relevant work . Ryotaro Kasuga started an RFC discussion on adding a new pass intended to replace DependenceAnalysis . stefanp provided an update on their previous machine scheduler / instruction fusion RFC . wdx727 gave an update on their RFC on matching and inference functionality for Propeller , noting that a PR has now been posted.
Nightly releases of the LLDB-DAP VS Code extension are now being published . Congcong Cai posed the question of which languages should clang-tidy checks claim to support by default (currently, due to the default isLanguageVersionSupported behaviour a newly added check will claim to support any language clang-tidy supports). dlav-sc proposed adding software watchpoints support to LLDB . LLVM commits MachineIR was extended to allow save and restore points with independent sets of registers. 1132e82 . The concept of non-integral and unstable pointer properties was split in DataLayout and the LangRef. dde000a . The SPV_KHR_bfloat16 SPIR-V extension is now supported. f91e0bf . Serialisation of bitstream remarks was altered so that the remark string table is always written to the end of the remarks file. dfbd76b . update_llc_test_checks should no longer fail silently to generate CHECK lines when there is an irreconcilable conflict for a subset of functions in an input file. 9d48df7 . llvm.errno.tbaa named metadata was introduced for specifying the TBAA node used for accessing errno . 32c6e16 . llvm-remarkutil now has a filter subcommand which can be used to extract remarks just for a certain function/pass/type. 6e6a3d8 . X86 GlobalISel support was added for llvm.set.rounding . 0c1087b . Benchmarks were added for LLVM’s Mustache template library implementation. 1867ca7 . Support for %T was removed from llvm-lit. 7ff6973 . Clang commits Exciting news for those wanting to target AArch64 or RISC-V ports of GNU Hurd: Clang now supports the appropriate triples. 092bc04 . ClangIR upstreaming continues with support added for lambda expressions. 9b9b9c6 . __builtin_masked_{gather,scatter} builtins were added. 2c6adc9 . HLSL now supports the matrix type. 6ac40aa . Implementation has started for constexpr structured bindings. 83331cc . As part of the lifetime safety work, lifetimebound attribute support was added. 0e17fcf . 
clang-tidy gained a readability-redundant-parentheses check. 85aeb6a . Other project commits A number of additional improvements were made to the libcxx benchmark comparison and visualisation scripts, such as the ability to find outliers or to compare more than two benchmarks. 42bb5a5 , d636dc8 , 72c512f . A source of n^2 complexity in BOLT was removed, reducing the runtime for optimising rustc_driver.so from 15 minutes to 7 minutes. 9469ea2 . Flang now supports -gsplit-dwarf . 96675a4 . MLIR gained a transform to “bubble down” memory-space cast operations, to allow consumer operations to use the original memory-space rather than first casting to a different one. 077a796 . A ptr.ptr_diff operation was added to MLIR. e3aa00e . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/600 | LLVM Weekly - #600, June 30th 2025 Welcome to the six hundredth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . 600 issues is another big milestone (still without missing a week) - thank you for reading! News and articles from around the web and events As a reminder, the first Bristol LLVM/MLIR meetup will take place on 2nd July . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Johannes Doerfert, Quentin Colombet, Renato Golin. Online sync-ups on the following topics: ClangIR, pointer authentication, LLVM libc, OpenMP, Clang C/C++ language working group, Flang, RISC-V, HLSL, MLGO. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Tobias Hieta provided the branch date for LLVM 21.x (15th July). Yaxun (Sam) Liu wrote an RFC proposal on adding GNU make jobserver support to LLVM , which raised a number of questions about things like current jobserver protocol support in Ninja. Alex Zinenko has re-summarised the vcix dialect removal discussion (especially taking into account how the discussion has branched into other topics). A new report on Flang progress as presented to “J3” is now available . Fangrui Song raised the prospect of deleting the Lanai target , though respondents so far suggest it may be “feature complete” as opposed to “abandoned”.
Nikita Popov gave some updated numbers on the slowdown for check-llvm and check-clang with dynamic libraries . A GSoC mid-term conference will take place on July 15th . Initial meetings have been scheduled for the newly formed LLVM Qualification Group. Recordings are now available from the MLIR open meeting on the WebAssembly dialect and the MLIR open meeting on rank-0 vectors . LLVM commits The number of dynamic relocations in libLLVM.so was drastically reduced by using llvm::StringTable to convert various string pointers in the MC layer to using 32-bit offsets into a character array. bb72424 . Support was added for the Windows Secure Hot-Patching feature, allowing LLVM to generate code changes and CodeView symbols to allow hot-patching applications (i.e. apply changes without restarting). 0a3c5c4 . Initial support was implemented for the Xtensa floating point instruction set (‘option’). 4154ada . The RISC-V SiFive 7-series scheduling model was refactored in order to maximise reuse with the upcoming X390 model. 7a33569 , f40909f . A scheduling model was added for the Intel Lunar Lake P-core, generated by schedtool . 2b93876 . A limited version of the WebAssembly stackification pass is now run even for O0 builds. This reduces the use of locals, which for some programs can be so great that it hits engine limitations. cd46354 . Recently agreed changes to the LLVM release process were committed to the documentation. 31545ca . The ILP32D calling convention was implemented for LoongArch. 4bb5e48 . RuntimeLibcalls.def was replaced with a tablegen-erated version. 3fdf46a , b88e1f6 . The newly created LLVM Qualification Group was documented 2b48ce7 . Clang commits The documentation on debugging C++ coroutines was revamped. b8f1228 . ClangIR now supports function linkage and visibility and basic support for operator new, among many other changes. 1e45ea1 , 74cabdb . clang-nvlink-wrapper gained support for passing on optimization record options. e80acd4 . 
libclang gained new bindings to query information about GCC-style inline assembly blocks. d76fdf7 . Out-of-process execution is now supported for clang-repl. 3f53155 . __builtin_invoke was added and is used in libc++. 7138397 . A modernize-use-scoped-lock check was added to clang-tidy. a3a60e0 . Other project commits pstl was removed from the top-level monorepo. f8ed456 . LLVM’s libc memcpy was improved for Cortex-M processors supporting unaligned accesses. 7289b67 . flat_map::insert was optimised. 34b2e93 . Build-time logic for checking LLDB plugin layering violations was added. e7c1da7 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/621 | LLVM Weekly - #621, November 24th 2025 Welcome to the six hundred and twenty-first issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Abdullah Amin wrote on the LLVM project blog about their GSoC project to extend LLDB with a rich disassembler . As a reminder, the call for proposals for the FOSDEM 2026 LLVM dev room closes on November 30th . Recordings from the 2025 US LLVM Developer’s Meeting have started to appear on YouTube. They’re not collected in a playlist yet, but for now just see the LLVM video list . According to the LLVM Calendar in the coming week there will be the following (though note it’s Thanksgiving in the US this week, so it’s possible some of the meetings towards the end of the week get cancelled). Office hours with the following hosts: Kristof Beyls, Johannes Doerfert, Amara Emerson. Online sync-ups on the following topics: Flang, modules, libc++, lifetime safety, LLVM/Offload, SPIR-V, OpenMP for flang, HLSL, memory safety. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Tanya Lattner is seeking volunteers for the EuroLLVM Developers Meeting Program Committee . Malavika Samak posted an RFC on integrating the clang-tidy checkers into Clang Static Analyzer (CSA) , allowing clang-tidy checks and CSA to be run in a single analysis pass.
David Stone kicked off an RFC discussion on raising the minimum compiler requirements for building LLVM to one that supports (much of) C++20 . The thread has multiple tables indicating compiler versions packaged for different OSes etc. One of the areas of concern was around which versions of Clang/GCC support a sufficiently large subset of C++20 without blocking bugs. Another area was the impact on compile times. There was some discussion on where working group meeting materials should go , e.g. an existing repo, new repos, etc. Ben Stott would like to support type entries in the TypeSanitizer ignore list . LLVM 21.1.6 was released . The Clang area team are proposing to grow the team size to 5 in 2026 . “We think the Clang area team would benefit from having more folks to take on action items, read RFCs in detail, do more follow-up, and do more to shepherd RFCs along. There were many talented, qualified candidates last year, and we’d like to rotate new folks onto the area team to have fresh faces, get more folks engaged, diversify the domain expertise on the area team, and empower more people to get stuff done.” Round table notes were shared on lifetimes , C++ interoperability , “operational maturity” , and annotations for C++ interoperability . Additionally, new LLVM project council meeting notes discuss SFrame support and requiring pull requests for all llvm-project commits. Amin Safavi wrote on the forum to showcase the eigen_tidy_plugin , which aims to catch common pitfalls when using the Eigen C++ template library for linear algebra . Marcell Leleszi and Muhammad Bassiouni are seeking feedback on their proposal for a wctype header implementation in LLVM libc . Alex Susu started an RFC discussion on upstreaming a backend for the Connex vector accelerator . Rahul Joshi suggested we support named address spaces rather than simply numbered spaces. The Clang working group are proposing to set up a dedicated meeting for C++26 reflection in Clang .
Fill in your availability if you’re interested. LLVM commits A distributed thin LTO cache was implemented. 3ee54a6b . Tablegen infrastructure was added to support pretty printing arguments in LLVM intrinsics, for instance printing a human readable name alongside the argument. 39e7712 . The X86 profile guided prefetch passes were removed as they are no longer being developed. Prefetch-related efforts are focused on post-link optimisers. 1425d75 . LLVM’s assembly parser now supports the .base64 directive which GCC has started to use as of GCC 15. 6245a4f . VPlan can now hoist loads out of the vector loop to the preheader when scoped noalias metadata proves they cannot alias with any other stores in the loop. 7c34848 . Carry-less multiply primitives were added to APInt. 727ee7e . llvm-dwp can now emit string tables over 4GB without losing data. ac6e48d . TableGen was updated so a target can “steal” the definition of a generic pseudoinstruction and remap the operands. bfb9539 . The TargetLibraryInfo data was moved to TableGen. c9f5734 . lit now sets a LIT_CURRENT_TESTCASE environment variable. 3f6cbde . A RISC-V Zilsd load/store pair optimisation pass was implemented. 645e0dc . An llvm.dbg.declare_value intrinsic was added, motivated by Swift async code. 20ebc7e . Clang commits Clang can now produce a crash reproducer shell script for IR inputs as well. 83d27f6c . The malloc_span attribute was introduced, which can be applied to functions returning span-like structures. eb65517 . Clang’s lifetime analysis can now detect use after return. 5343dd9 . Other project commits Address sanitizer support was added for AIX. c62fc06 . Arm optimized implementations for mulsf3 and divsf3 were implemented in compiler-rt. 5efce73 . Flang now parses OpenMP loop nests as a whole rather than in a piece-wise manner. c2d659b . LLVM’s libc now has a float-only implementation for atanf. aa3f930 . LLDB’s codebase was prepared for supporting integer registers wider than 64 bits. 1fb8e3d .
A pass to narrow i64 TOSA operations to i32 was added to MLIR. c61c5d2 . New debug macros were added to LLVM/Offload. 66ddc9b . ORC runtime design documentation was committed. de9c182 . Subscribe at LLVMWeekly.org .
https://llvmweekly.org/issue/615 | LLVM Weekly - #615, October 13th 2025 Welcome to the six hundred and fifteenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events A final call has gone out for registration for the 2025 US LLVM Developers' Meeting, noting the event is nearly full. Additionally, the agenda has been announced for the LLVM hearts ML workshop. Videos from the GNU tools cauldron 2025 have now been published (also available on YouTube ). The next LLVM meetup in Munich will take place on October 28th . Note the requirements for pre-registration and bringing an ID card in order to access the building where it is held. According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Aaron Ballman, Alexey Bader, Alina Sbirlea, Phoebe Wang, Johannes Doerfert. Online sync-ups on the following topics: Flang, alias analysis, modules, lifetime safety, LLVM/Offload, Clang C/C++ language working group, SPIR-V, OpenMP for flang, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Aaron Ballman suggested Clang explicitly document a guarantee that type punning through a union is allowed in C++ , as it is often used in real code, matches what Clang does today, and GCC documents such a guarantee. If you have any feedback or feature requests for MLIR’s remark infrastructure, Guray Ozen would love to hear from you .
Florian Mayer started an RFC discussion on adding a __builtin_static_analysis_assume attribute builtin, which could be used to communicate information to static analysis tools such as clang-tidy checks. Gergely Bálint posted an RFC on adding support to BOLT for optimising BTI binaries (that is, binaries using Arm’s Branch Target Identification feature for control flow integrity). LLVM 21.1.3 was released . Dharuni R Acharya proposed adding support for pretty printing immediate arguments in LLVM intrinsics , which would add a comment with a description for integer immediates. LLVM commits The sanitize_alloc_token attribute and alloc_token metadata was added, alongside the AllocToken instrumentation pass. This is part of work to enable allocator partitioning hints for use by hardened allocators. 224873d , c7274fc . The LLVM and SPIRV backend part of the HLSL working group typed buffer counters proposal was implemented. 5547c0c . LLVM’s OptTable command line option handling now supports subcommands. fdbd17d . llvm-reduce gained a new reduction pass to inline call sites. ff394cd . LLVMGetOrInsertFunction was added to LLVM’s C API. 01f4510 . clang-offload-packager was renamed to llvm-offload-binary and moved into the llvm/ subdirectory. 2499fe1 . The layout of the .callgraph section containing metadata on the function call graph was documented. 6fb87b2 . Clang commits Initial array new expression support was added to ClangIR, as well as dynamic cast. 09f0f38 , 4e53067 . A liveness-based lifetime policy was implemented as part of the lifetime safety work. 6bbd7ea . -fsanitize=alloc-token is now supported. 774ffe5 . wasm32-linux-muslwabi can now be targeted. 7e7c923 . Other project commits BOLT gained new passes to enable optimisation of AArch64 binaries with pac-ret hardening. 32eaf5b . A Nix recipe for collecting linker reproducers for benchmarking purposes was added to the lld subdirectory. 74858f3 .
After a period of deprecation, vector.splat was removed from the MLIR vector dialect (vector.broadcast should be used instead). ea291d0 . MLIR now supports Python-defined rewrite patterns. 7aec3f2 . A memory allocation backend for use with ORC was added to orc-rt. 891f002 . Subscribe at LLVMWeekly.org .
https://llvmweekly.org/issue/616 | LLVM Weekly - #616, October 20th 2025 Welcome to the six hundred and sixteenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Anthony Tran wrote on the LLVM blog about usability improvements for the Undefined Behavior Sanitizer implemented as part of GSoC. Timm Baeder blogged about recent developments for the Clang bytecode interpreter . The next LLVM Bay Area Monthly meetup will take place on Monday October 27th . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Kristof Beyls, Johannes Doerfert, Amara Emerson. Online sync-ups on the following topics: ClangIR, pointer authentication, vectoriser improvements, OpenMP, Flang, RISC-V, LLVM libc, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Petr Hosek provided an update on plans to require pull requests for all llvm-project commits . The next step is to enable the restriction but give everyone the ability to bypass it. Henrik G. Olsson started a conversation on adding new command line options for controlling llvm-lit output . For instance, providing finer-grained control for aspects such as whether test results are printed during testing or in a summary at the end. Reid Kleckner reported back on the CMake option for 64-bit source location RFC based on the discussion in the area team meeting.
I don’t think I can make a more effective summary than Reid’s very clear bulletpoint listing, so I encourage you to go and read that! Rahman Lavaee posted an RFC on extending the PGO analysis map with Propeller CFG frequencies . Rahul Joshi queried when to use Twine vs StringRef as a function argument in LLVM . The LLVM Code of Conduct Committee shared this year’s Code of Conduct transparency report . Thank you to the committee for their hard work in this period. Serge Pavlov proposed introducing new generic DAG nodes for floating-point operations , whereby the same node is used for both default and strictfp environments rather than duplicating each operation with a strict and non-strict node. Ben Stott started an RFC thread on reducing process creation overhead in LLVM regression tests , noting that on Windows in particular the overhead from the many processes spawned during check-llvm can be substantial. From the responses so far, there is support for looking for ways of reducing this overhead, but some concerns that the approach presented in the RFC might not be the best way to do so. Yury Plyakhin proposed the addition of SYCLBIN, a format for SYCL device code . Jiachen Yuan raised the issue of running out of LaneBitMask bits . Maksim Levental started an RFC discussion on upgrading LLVM’s minimum required Python version . This triggered a lengthy discussion about topics such as level of testing, what a sufficient bar is for opting to increase the minimum version, and so on. Nikita Popov suggested deprecating C API functions using the global context , noting that a common source of mistakes is to accidentally mix APIs using the global context and those using an explicit context. LLVM commits -print-debug-counter-queries was added to print the current value of the counter and whether it is executed/skipped each time it is queried. 9ace311 . An arbitrary number of cases can now be used in LLVM’s StringSwitch . 8642762 .
Register usage debug printing at the point of maximum register pressure was implemented for AMDGPU. 8823efe . An llvm.dx.resource.getdimensions.x intrinsic was added. 78d9816 . IndVarSimplify learned to better optimise some loops with thread-local writes. 39b0cbe . BlockFrequencyInfo is no longer used in SimpleLoopUnswitch, which was the last loop pass using it. df89564 . Codegen was implemented for the RISC-V Zvfbfa extension. 0727e7a . Clang commits The core.NullPointerArithm checker was introduced, which will find undefined arithmetic operations with null pointers. 8570ba2d . X86 PSHUFB intrinsics can now be used in constexpr. 140d465 . clang-format’s algorithm for continuing aligned lines was improved. a0b8261 . Other project commits The MultiMemRead packet for LLDB’s debugserver was both documented and implemented. It reads memory from multiple memory ranges. e91786a , 5e668fe . A new lower-workdistribute pass was added to Flang that will operate on lowered Fortran array statements and perform various rewrites and optimisations. f4fe714 . libcxx std::distance and std::ranges::distance were optimised for segmented iterators. As this reduced the complexity from O(n), there are some big gains in the microbenchmarks. fd08af0 . The MathToXeVM and GPUToXeVM passes were added to MLIR. 3f3ffed , 89d1143 . The Wasm binary to WasmSSA MLIR importer now supports control flow operations, comparison operations, and conversion operations. e40f215 . Polly’s PolyhedralInfo pass (an experiment to make Polly’s analysis available to other passes) was removed as its implementation was very tied to the legacy pass manager. c46fec8 . Subscribe at LLVMWeekly.org .
https://llvmweekly.org/issue/609 | LLVM Weekly - #609, September 1st 2025 Welcome to the six hundred and ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events LLVM 21.1.0 has now been released . Congratulations and thank you to all contributors! The early bird registration deadline for the 2025 US LLVM Developers' Meeting ends on September 5th . Blog posts from LLVM’s GSoC have started to be published, with Leandro Augusto Lacerda Campos writing about profiling and testing math functions on GPUs . The next LLVM Meetup in Darmstadt will take place on the 24th of September . Andrew Pinski has started posting weekly GNU toolchain developments on Mastodon at @gnutoolsweekly@hachyderm.io . See week 1 here . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Quentin Colombet, Johannes Doerfert, Renato Golin. Online sync-ups on the following topics: Flang, LLVM qualification, modules, libc++, lifetime safety, LLVM/Offload, Clang C/C++ language working group, OpenMP for Flang, HLSL, memory safety, MLGO. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Mehdi Amini advertises that MLIR pattern search is now live . Jim Ingham proposes breakpoint add as a redesign of LLVM’s breakpoint setting command interface .
Mehdi Amini started a discussion on supporting newer C++ standards in the LLVM codebase , seeking to clarify that patches that allow building LLVM with more recent versions of the C++ standard than the minimum supported version are welcome. Wendi Urribarri asks the wider LLVM community for views on use of AI transcripts in LLVM working group meetings . Some respondents have concerns that direct transcription of all comments may hurt communication by making people overly cautious about what they say. Alexander Richardson returned to an old thread to suggest AlignConsecutiveShortCaseStatements and AllowShortCaseLabelsOnASingleLine might be useful for LLVM’s .clang-format . Tobias Hieta gave an update on his thread regarding reducing executable size , summarising what they ended up doing. John Regehr shared that the paper “Translation Validation for LLVM’s AArch64 Backend” is now available . Prabhu Rajasekaran posted an RFC on subcommand feature support in LLVM OptTable (command line parsing) . David Spickett posted a PSA that Linux buildbots testing compiler-rt will in the future need the Python ‘packaging’ module . Michael Adams' lecture slides on learning to use the Clang libraries have been updated to Clang/LLVM 21 . Gabriel Ford started an RFC thread on upstreaming Zig language support to LLDB . LLVM commits TableGen will now emit a getOperandIdxName function, which is the inverse of getNamedOperandIdx, returning the OpName for the given operand index of an opcode. 2d5a3c8 . llvm-readobj gained support for --coff-pseudoreloc , which will dump runtime pseudo-relocation records. faf6ba4 . MC layer support was added for the experimental RISC-V Zvfbfa extension (additional BF16 vector compute support). 717771e . Targets can now custom expand atomic load/stores. 8b9b0fd . For Windows, llvm-lit.cmd is now generated so that llvm-lit can be used as a direct executable without external wrappers or modifications to buildbot scripts. f875a73 .
After being deprecated for eight years(!), the LLVM_ENABLE_IR_PGO CMake option was removed. b1c8228 . Clang commits Infrastructure was added to allow more detailed “trap reasons” for UBSan. This allows messages like Undefined Behavior Sanitizer: signed integer addition overflow in 'a + b' rather than the less specific Undefined Behavior Sanitizer: Integer addition overflowed . f1ee047 . Clang now throws a frontend error when a function marked with the flatten attribute calls another function that requires target features not enabled in the caller. c6bcc74 . The -Walloc-size warning was implemented, which will flag calls to functions decorated with the alloc_size attribute that don’t allocate enough space for the target pointer type. 6dc188d . RISC-V vector builtins code generation was refactored to reduce compile time (as reported in the commit message, from 1m5s to 23s on the tested system). 9db7e8d . -gkey-instructions was enabled by default if optimisations are enabled, intended to improve the debug-stepping experience for optimised code. 9d65f77 . ClangIR upstreaming continues with advancements such as support for virtual base classes. e7c9f2d . clang-tidy gained a cppcoreguidelines-pro-bounds-avoid-unchecked-container-access check which will find calls to operator[] in STL containers and suggest replacing them with safe alternatives. bc278d0 . Other project commits The LLD MachO linker now has the ability to preload input files into memory, resulting in meaningful improvements in linking times for large inputs. 2b24287 . Loop interchange was disabled in flang by default, matching Clang. 8849750 . All of Flang’s lit tests now work with lit’s internal shell, which is now used by default. 3e17864 , 794f82e . LLVM’s JSONTransport is now used to implement the LLDB MCP server. Also lldb-mcp was added to act as a proxy between the LLM and one or more LLDB instances. 1ba8b36 , aa71d95 . The workdistribute construct was added to MLIR’s OpenMP dialect. 7681855 . 
A README was added for the LLVM Offload GPU math conformance test suite. 9c410dd3 . Subscribe at LLVMWeekly.org .
https://llvmweekly.org/issue/605 | LLVM Weekly - #605, August 4th 2025 Welcome to the six hundred and fifth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events The agenda has now been posted for the upcoming US LLVM Developers' Meeting. The next Bay Area LLVM meetup will take place on Monday August 11th . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Johannes Doerfert, Quentin Colombet. Online sync-ups on the following topics: Flang, MemorySSA, qualification group, modules, libc++, lifetime safety, LLVM/Offload, C/C++ language working group, SPIR-V, OpenMP for flang, HLSL, memory safety working group. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Renato Golin started a discussion on further fleshing out LLVM’s maintainer policy . Peter Hallonmark shared some thoughts on supporting MISRA checks in a clang-tidy module . Owen Anderson, Jessica Clarke, Alex Richardson, and David Chisnall posted an RFC on upstreaming target support for CHERI-enabled architectures . As the post explains, more fine-grained RFCs will follow - the intent of this one is to get feedback/consensus on the overall direction of upstreaming. Respondents so far are all very positive. LLVM 21.1.0-rc2 was released . Clang/LLVM’s lack of full support for the restrict qualifier has been discussed many times over the years.
Vladislav Belov breathed new life into the discussion with a new proposal based on encoding lexical scope information in TBAA metadata. The MLIR project lighthouse repository is now online at llvm/lighthouse . “Sirraide” started a Clang RFC discussion on a new text diagnostics format that supports nested diagnostics . Prerona Chaudhuri is collecting a list of common errors or missing features in TableGen as part of a “FriendlyTableGen” initiative. LLVM Foundation Board meeting minutes have now been posted for meetings on February 28th , April 4th , May 9th , and June 5th . “aman612” asked for and received thoughts on GlobalISel vs SelectionDAG in 2025 . Jason Eckhardt shared experiences from bringing up new backends from scratch on GlobalISel and Amara Emerson noted some of the current limitations/downsides . Muhammad Bassiouni proposed adding bounds-checking interfaces to LLVM libc (as defined in Annex K of C11). Aaron Ballman, Shafik Yaghmour, Vlad Serebrennikov, and Corentin Jabot are seeking community feedback on a ‘hardening’ mode for Clang . As outlined in the RFC, although there are lots of choices to be made about how this is done, you could imagine enabling various -f , -m , -D , and -W flags by default. “We are looking for high-level direction from the community on how to proceed. Once we know that the community supports the notion of a hardened mode, and we know the general shape of how the community wants that mode surfaced, we intend to come back with a separate proposal for that particular path forward as well as the initial set of functionality enabled by that mode.” LLVM commits More float operations are expanded by default. fe0dbe0 . LLVM’s debug logging infrastructure now supports a ‘log level’, e.g. if you LDBG(2) << "foo" and run with --debug-only=some_type:1 then the foo message will be filtered out. 9c82f87 . A new CreateVectorInterleave interface was added to IRBuilder . 6fbc397 .
GlobalISel MachineIRBuilder::(build|materialize)ObjectPtrOffset interfaces were introduced, similar to SelectionDAG::getObjectPtrOffset . Additionally, MIFlags::InBounds was introduced to indicate the operation implements an inbounds getelementptr operation. d64240b , ef6eaa0 . The implementation of SFrame support continues, with the addition of parsing and dumping of SFrame function description entries (FDEs). ded255e . llvm-mc now accepts a --runs option to aid benchmarking. 4f39139 . The guide on cross-compiling compiler-rt builtins for Arm was updated. c10736a . llvm-lit learned to support an --exclude-xfail option, which skips running xfailed tests. b383efc . The AsmPrinter can now emit a call graph section. 7c6a1c3 . LoopInvariantCodeMotion stopped reassociating constant offset GEPs. 0a41e7c . Clang commits A new bugprone-invalid-enum-default-initialization check was added. 6f2cf6b . The use of %T was removed from clang-tools-extra tests, noting it has been deprecated for 7 years. Similar patches landed in other LLVM subprojects. 5d489b8 . The run-clang-tidy script now has an enable-check-profiling option to aid benchmarking. f72b3ec . There’s been lots of additional ClangIR upstreaming, e.g. support for the poison attribute and further complex type support. e2b4ba0 , 03e54a1 . Other project commits Scudo can now be compiled without quarantine support. ef96275 . A crt0 implementation for Arm baremetal was added to LLVM’s libc. 8c9863e . libcxx introduced “assertion semantics”, giving control over what happens when an assertion fails. 3eee9fc . libsycl was started, a runtime library implementation for SYCL. 4cec493 . A WebAssembly process plugin was added to LLDB, which adds support for fetching the call stack from the WebAssembly runtime. RegisterContextWasm was also introduced, allowing local variables to be shown. a28e7f1 , f623702 . MLIR vector.extractelement and vector.insertelement ops were removed in favor of vector.extract and vector.insert .
33465bb . LLVM’s Offload subproject gained a framework for math conformance tests. This is intended to be used to measure the accuracy of math functions in the GPU libc. 2abd58c . Subscribe at LLVMWeekly.org .
https://llvmweekly.org/issue/597 | LLVM Weekly - #597, June 9th 2025 Welcome to the five hundred and ninety-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events The call for proposals for the LLVM Developers' Meeting is open . Submissions are due by July 15th. The event will take place on October 28th-29th in Santa Clara. Matt Godbolt has written up a blog post on how Compiler Explorer works in 2025 . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Aaron Ballman, Alexey Bader, Alina Sbirlea, Kristof Beyls, Johannes Doerfert. Online sync-ups on the following topics: Flang, alias analysis, modules, libc++, SPIR-V, BOLT, OpenMP for Flang, memory safety. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Luc Forget, Ferdinand Lemaire, and Jessica Paquette proposed an MLIR dialect for WebAssembly . This is intended to support ahead-of-time compilation of Wasm to native code. A group of MLIR developers proposed introducing the ‘MLIR Project Lighthouse’ . The idea is to make a separate LLVM repository, like LLVM’s test-suite, that provides end-to-end pipelines. “The key here is to serve as an upstream demonstrator of existing expectations.
Downstream projects (such as IREE, TPP, Tile IR, CIRCT) would work together with their communities to add common recipes as standalone scripts/binaries or -opt pass pipelines, and tests, to make sure integration end-to-end testing is done in upstream LLVM/MLIR.” Jeremy Kun started a discussion on what patterns can/should be upstreamed to MLIR which generated some thoughtful replies. The LLVM 20.1.7 release was delayed to June 12th and the intention is that this is the last 20.1.x release. Rahul Joshi suggests changing the LLVM coding standard to deprecate the use of (void)Foo in favour of [[maybe_unused]] for suppressing unused variable warnings. Karl Friebel started a thread to reignite discussions from the EuroLLVM roundtable on MLIR debug information and pass tracing . Caroline Tice proposed a migration plan for moving libc++ premerge testing to the new LLVM premerge testing infrastructure . Kristof Beyls on behalf of the security response group proposed improvements to the wording of what is considered a security issue by the LLVM project . The main feedback so far is that it would be good to provide more clarity on what kinds of issues in the parts of the LLVM project deemed “security sensitive” should be handled as a security issue. Rahul Joshi notes the large size of some *ISelLowering.cpp files and suggests an effort to split them up. Donát Nagy wrote up notes on the role of ProgramPointTag in the static analyzer as well as ideas to simplify it. Apologies for missing the discussion before, but Haojian Wu started a discussion on moving to 64-bit source locations in Clang and shared some statistics on the memory usage impact. Yingwei Zheng started an RFC discussion on a function attribute to provide constant time execution guarantees in LLVM . Respondents so far are suggesting introducing constant-time intrinsics instead. Britton Watson is still seeking more volunteers for the LLVM Dev Meeting program committee and student travel grant committee .
Joseph Faulls is seeking feedback on an RFC to improve MachineSink sinking into cycles . LLVM commits The size of the MC instruction decoder table was reduced by 5-30% across different targets by changes to the decoder opcode representation. e53ccb7 . A SimplifyTypeTests pass was introduced. 3fa231f . In order to avoid the need for targets to define MCTargetExpr subclasses just to encode an expression with a relocation specifier, MCSpecifierExpr was introduced. 97a32f2 , 4a6b4d3 , and more. The new HashRecognize analysis was introduced, which will recognise polynomial hashes (e.g. to help select CRC instructions). af2f8a8 . MachineCopyPropagation now performs instruction simplification ( TargetInstrInfo::simplifyInstruction ) iteratively, which catches more cases than before. e723e15 . The RISC-V backend now materialises constants using lui and addi rather than lui and addiw on RV64 whenever possible, which provides more opportunities for memory offset folding. This reduces the dynamic instruction count of the 519.lbm_r benchmark by 4%. 3d2650b . Various LLVM interfaces have been annotated for DLL export, as part of the work to (in the future) support an LLVM Windows DLL (shared library) build. 7dc5dc9 . Initial support was implemented for SVE unwind on Windows AArch64. 6f64a60 . IRMover no longer considers opaque types isomorphic to other types. bb75f65 . A scheduling model was implemented in the RISC-V backend for the Andes 45 series processor. 991d754 . LoopAccessAnalysis now has an option to keep the runtime pointer checks for the pointers it could analyse even when there are some it can’t. 81d3189 . LLVM’s libc is no longer included in builds with LLVM_ENABLE_PROJECTS=all . 52ad274 . The machine scheduler’s instruction clustering was improved. 0487db1 . Clang commits libclang’s ABI and API stability was documented. It includes a non-exhaustive list of cases where ABI and API breaking changes may occur. ab650c6 .
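For context on HashRecognize: the analysis looks for the classic bit-at-a-time polynomial-division loop, as in this standard reflected CRC-32 sketch (not code from the patch), so that the backend can substitute a hardware CRC instruction or a table-driven expansion:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Bit-at-a-time CRC-32 (reflected, polynomial 0xEDB88320): the kind of
// polynomial hash loop an analysis like HashRecognize detects.
uint32_t crc32(const unsigned char *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
            // Conditionally XOR in the polynomial, branch-free.
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}
```

The standard check value for this algorithm is crc32 of the ASCII string "123456789", which equals 0xCBF43926.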
The -CCC_OVERRIDE_OPTIONS option was documented. 95ea436 . __builtin_get_vtable_pointer was added. 93314bd . The new cfi_unchecked_callee attribute can be used to prevent the compiler from instrumenting Control Flow Integrity checks on indirect function calls. b194cf1 . Other project commits Building libclc by including it in LLVM_ENABLE_PROJECTS was deprecated. LLVM_ENABLE_RUNTIMES should be used instead. 6306f0f . libc++’s priority_queue is now constexpr for C++26. 3e5fd77 . Unroll patterns and a blocking pass was added for the MLIR XeGPU dialect. 0210750 . A new isMaskTriviallyFoldable helper was added to MLIR’s vectorization hooks, and createWriteOrMaskedWrite gained a new parameter for write indices. b4b86a7 . LLVM’s OpenMP library can now be built for SPARC. 3f8827e . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
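On the constexpr priority_queue change: the adaptor works as below at run time today; the libc++ change means the same code can also be evaluated in constant expressions under C++26 (the helper name here is invented):

```cpp
#include <cassert>
#include <queue>

// std::priority_queue is a max-heap by default: top() yields the largest
// element pushed so far.  With C++26 libc++ this function could also be
// marked constexpr and evaluated at compile time.
int max_of_three(int a, int b, int c) {
    std::priority_queue<int> pq;
    pq.push(a);
    pq.push(b);
    pq.push(c);
    return pq.top();
}
```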
https://llvmweekly.org/issue/618 | LLVM Weekly - #618, November 3rd 2025 LLVM Weekly - #618, November 3rd 2025 Welcome to the six hundred and eighteenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Narayan Sreekumar wrote on the LLVM project blog about their ABI lowering library GSoC project . Min Hsu published a second blog post on LLVM’s Machine Scheduler . The call for proposals for the 2026 FOSDEM LLVM dev room is now open and will remain open until November 30th. Submissions for the tenth LLVM Performance Workshop at CGO are open until December 16th . The next Portland area LLVM Social will take place on November 20th . According to the LLVM Calendar in the coming week there will be the following. Office hours with the following hosts: Johannes Doerfert, Renato Golin, Quentin Colombet. Online sync-ups on the following topics: ClangIR, pointer authentication, LLVM qualification, OpenMP, Clang C/C++ language working group, Flang, RISC-V, LLVM embedded toolchains, HLSL, MLGO. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Various notes from roundtables at the LLVM Dev Meeting have been shared: AI policy , RISC-V , loop optimisation . Slides from the LLVM hearts ML workshop are also now available . Based on discussions at the LLVM Dev Meeting, Krzysztof Drewniak posted an MLIR RFC on properties and attributes . “At a high-level, the current system for non-attribute properties is a bit too incremental and a bit hard to introspect. 
However, one realization is that both current attributes and current non-attribute properties are kinds of data you can have in your program / attach to an operation. That is, we can re-root the Attribute hierarchy under a more general Property hierarchy…”. Nathan Gauër followed up on the GEP type information RFC with a specific proposal for an llvm.structured.gep intrinsic . Discussion resumed on an 8-year old RFC on improving x86-64 compact unwind descriptors . “john123” proposed Arey, a debugging-focused dialect of MLIR intended to support printf-style debugging and runtime assertions inside MLIR IR. Shreeyash Pandey posted an RFC on porting LLVM libc to MacOS . LLVM commits A scheduling model was added for the Neoverse V3 and V3AE. a17dcaf . A new summary tool was added to llvm-remarkutil. 128214f . Guidelines for adding/enabling passes were documented. 43ea75d . update_test_checks now supports -check-inst-comments to check instruction annotations in comments. 511c9c0 . llvm-config’s new --quote-paths option will cause paths to be printed quoted and escaped. 421ba7f . Clang commits Clang gained code generation support for __builtin_infer_alloc_token() . 8c8f2df . The readability-redundant-typename check was added to clang-tidy. 81de861 . Other project commits Type Sanitizer gained an option to use function calls rather than inline checks for its instrumentation. 7a957bd . compiler-rt tests now default to Lit’s internal shell. 2253413 . flang-rt now has an install target for its header files. 4d6bff4 . Boost.Math 1.89 was imported in the third-party subdirectory. 585da50 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/610 | LLVM Weekly - #610, September 8th 2025 LLVM Weekly - #610, September 8th 2025 Welcome to the six hundred and tenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events The call for proposals is open for the workshop on supporting memory safety in LLVM , taking place just before the US LLVM Developers' Meeting. Pedro Lobo wrote on the LLVM blog about their GSoC project to add a byte type to LLVM IR . Fangrui Song blogged about changes in the LLD 21 release . The next LLVM meetup in Berlin will take place on 18th September , featuring Morris Hafner presenting his work on upstreaming ClangIR. The next compilers social in Cambridge UK will take place this Friday 12th September as a full day “MLIR (Un)School Meets UK Compiler Community” event. According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Alexey Bader, Alina Sbirlea, Kristof Beyls, Johannes Doerfert. Online sync-ups on the following topics: MLIR C/C++ frontend working group, ClangIR upstreaming, alias analysis, pointer authentication, OpenMP, Flang, RISC-V, embedded toolchains, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Petr Hosek proposed that all unreviewed changes to the main branch in the llvm-project repo should go through a PR . The proposal still allows merging before CI checks finish, meaning quick fixes and reverts can still be made. 
Nathan Gauër started an RFC thread on adding instructions to carry GEP type traversal information . This is motivated by the SPIR-V backend, which struggles due to information loss with plain ptradd . Fabian Mora shared a proposal on cleaning up the MLIR GPU dialect and strengthening its semantics . The proposal suggests moving a number of its operations either to new dialects, an existing dialect (when appropriate), or removing them altogether. Nikita Popov provided an update on work to implement an ABI lowering library for LLVM . Initial results (thanks to vortex73’s GSoC work) are positive and upstreaming should start soon. Farid Zakaria raised the issue that GitHub has recently started timing out downloads if they don’t finish in 5 minutes, which poses problems for people downloading GitHub releases on a slower connection. Erik Hogeman started a discussion on extending the current set of fast-math flags . As Nikita notes in response, we’re currently out of free bits in llvm::Value . “buggfg” noted that LLVM vectorization memory analysis seems to fall short in some examples vs GCC and ICX . Rick van Voorden wondered if use of the term ‘sanity checks’ should be moved to an alternative phrase . Quentin Colombet advertised his book, LLVM Code Generation: a deep dive into compiler backend development . LLVM Foundation board meeting minutes from August have now been posted . Arjun Ramesh proposed the addition of a wasm32-wali-linux-musl target to Clang . Tom Stellard kicked off an RFC thread on adding AArch64 pre-commit CI , with discussion revolving around exactly what sort of configurations should be tested. LLVM commits A new doc was started on debugging LLVM. 82978df . The HashRecognize analysis is now used by LoopIdiomRecognize to optimize CRC. c5da190 . The LDBG macro was documented in the programmer manual. 5f41241 . mmra metadata was documented. e34d2e1 . A VPlan-level common subexpression elimination pass was implemented. d8fd511 .
InstCombine learned to merge constant-offset getelementptrs (geps) across variable geps. 349523e . LLVM_ENABLE_STRICT_FIXED_SIZE_VECTORS was removed. cd7f7cf . Codegen and intrinsics for the proposed RISC-V Zvqdotq “vector quad widening 4D dot product” extension were implemented. 7fb1dc0 . An OPC_Scope opcode was added to the DecoderEmitter, resulting in an average reduction in decoder tables of 1.1% (up to 7% in some cases). 3250349 . Clang commits __builtin_*_synthesises_from_spaceship were added, which makes it possible to do some optimisations by using <=> (spaceship) rather than doing multiple comparisons. b1a8089 . Support was added for C2y’s “named loops” feature. e4a1b5f . The alpha.core.CastSize static analyzer checker was removed. 9086590 . As usual there was more forward progress on ClangIR, e.g. adding support for constant record initializer list expressions. 1dbe65a . clang-reorder-fields now supports designated initializers. 4d9578b . Other project commits Load and store operations were implemented for MLIR’s ptr dialect, and core ptr ops now have a conversion to LLVM IR implemented. 77f2560 , d15998f . LLVM’s libc now provides a strlen implementation, implemented using a new simd.h helper header. eb7b162 . SPEC can now be run as part of libc++’s own benchmarks. a40930b . lldb-dap learned the --no-lldbinit flag to skip sourcing .lldbinit . db3054a . MLIR’s SPIR-V dialect gained support for the SPV_ARM_graph extension. cf3a887 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34
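On C2y named loops: the feature lets break/continue name an enclosing loop directly (shown in the comment below); today's portable code has to emulate that with a flag or goto, as in this runnable sketch (function and data names invented):

```cpp
#include <cassert>

// C2y named loops (N3355-style, as now supported by Clang) would allow:
//
//     outer: for (int i = 0; i < 3; ++i)
//         for (int j = 0; j < 3; ++j)
//             if (grid[i][j] == key)
//                 break outer;       // leaves BOTH loops at once
//
// Portable pre-C2y code emulates `break outer` with a flag:
int find_flat_index(const int (&grid)[3][3], int key) {
    int found = -1;
    for (int i = 0; i < 3 && found < 0; ++i)   // stop once found
        for (int j = 0; j < 3; ++j)
            if (grid[i][j] == key) {
                found = i * 3 + j;
                break;                          // only exits inner loop
            }
    return found;
}
```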
https://llvmweekly.org/issue/607 | LLVM Weekly - #607, August 18th 2025 LLVM Weekly - #607, August 18th 2025 Welcome to the six hundred and seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events The 2025-08 WG21 C++ mailing has been posted. According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Phoebe Wang, Johannes Doerfert. Online sync-ups on the following topics: Flang, vectorizer improvements, modules, lifetime safety breakout group, LLVM/Offload, Clang C/C++ language working group, SPIR-V, OpenMP for Flang, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Renato Golin noted that the draft document on LLVM maintainer policy has been updated after feedback . Tobias Hieta asked for and received advice on reducing the executable size of a heavily optimised program . Kshitij Jain is surveying people on their opinions on how TableGen could be better utilised in MLIR . Slides and a recording from the MLIR open meeting on properties design are now available . Owen Anderson posted an RFC on adding a riscv32 subarchitecture for CHERIoT . Discussion is currently focused on whether adding a subarchitecture is necessary/best vs e.g. relying on selection via -march flags and/or the target OS in the triple. Trevor Gross is seeking feedback on the idea of adding matrix-style preprocessing to lit to reuse tests across backends . LLVM 21.1.0-rc3 was released .
This is intended to be the last release candidate in the 21.x series. There are quite a few meetings / special interest groups within LLVM now, some of them taking notes or minutes and posting them to Discourse. I don’t tend to collect these right now in LLVM Weekly, but made a post to suggest such posts be given a meeting-minutes tag so they can be easily found. Respondents so far seem to be supportive. Sergey Shcherbinin started an RFC thread on extending the ExtBinary sample profile format with TypifiedProfileSection to support attaching multiple typed profile payloads to each function. David Spickett kicked off a discussion about removing the PDB plugin from the LLDB build that uses Microsoft’s DIA SDK . Alex Bradbury wrote a quick note sharing some numbers on the impact on dynamic instruction count of SPEC 2017 benchmarks on RISC-V for -ffp-contract=fast (matching GCC) vs LLVM’s default . LLVM commits An initial Content Addressable Storage (CAS) library was added to LLVM. dda996b . Backend calling convention lowering implementations can now access the original IR type. e92b7e9 , 498ef36 . Multiple save/restore points can now be represented in MachineIR. bbde6be . The documentation on cross-compiling builtins for Arm was further updated. dc41571 . Codegen for most significant bit extraction was improved for RISC-V. 18782db . MC level support was added for more instructions from the proposed RISC-V P (packed SIMD) extension. e2eaea4 . The output of the debugify coverage tracking was slimmed down by adding a mechanism to handle cases where single errors end up being reported many times after being propagated to other instructions in later passes. bc216b0 . Clang commits By default, clang-tidy no longer attempts to analyse code from system headers. This improves performance significantly. bd77e9a . __builtin_elementwise_{fshl,fshr} were added. c3bf73b . ClangIR gained initial support for atomic types. 331a5db .
The newly introduced cfi_salt attribute can be used to specify a ‘salt’ for control-flow integrity (CFI) checks to distinguish between functions with the same type signature. aa4805a . The list documenting the C features backported to previous C standards was made more accurate. ed6d505 . Other project commits A skeleton of an importer for Wasm binaries was started in MLIR. 6bb8f6f . The Dexter debug experience tester now has Debug Adapter Protocol (DAP) compatibility. d934554 . -f[no-]openmp-simd was added to Flang to enable OpenMP support only for SIMD constructs. d3d96e2 . LLVM libc’s GPU benchmarking infrastructure was improved. 08ff017 , cf5f311 . LLDB learned to parse the Wasm symbol table. 5be2063 . A new MLIR pass was added for vector to AMX conversion. 7d1b9ca . The XeVM target and XeVM dialect integration tests were added to MLIR. baae949 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34
https://llvmweekly.org/issue/611 | LLVM Weekly - #611, September 15th 2025 LLVM Weekly - #611, September 15th 2025 Welcome to the six hundred and eleventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events The September Portland area LLVM social will take place on 18th September . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Phoebe Wang, Johannes Doerfert. Online sync-ups on the following topics: Flang, vectoriser improvements, modules, security response group, lifetime safety breakout group, LLVM/Offload, Clang C/C++ language working group, SPIR-V, OpenMP for flang, HLSL working group. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Sebastian Pop started a thread to discuss the gradual removal of types from LLVM IR (e.g. pointers). This spawned a lot of discussion on different somewhat related topics, e.g. the criteria for enabling passes by default, the delinearization pass, and of course the future of LLVM IR. Nikita Popov gives a direct answer to the first question about why type information is being removed . Benoit Meister posted an RFC Ripple: A Compiler-Interpreted API for Efficient SIMD Programming . “We have been working on Ripple, a lean addition to LLVM to support Single-Program, Multiple-Data (SPMD) and loop-annotation-based parallel programming for SIMD hardware. 
We propose a parallel programming API to support these two models, which departs from GPU-style SPMD programming, in that block computations of different dimensions (including 0) can coexist in the same function. This makes it easier to explicitly express mixes of scalar, vector and tensor computations.” LLVM 21.1.1 was released . Nikita Popov suggested guidelines for adding/enabling new passes . Mehdi Amini queried the IndexType in MLIR and any assumptions about bitwidth . Christopher Di Bella posted a Clang RFC on diagnosing disallowed pointers-to-members . Théo Degioanni started a thread to track the roadmap for completing IRDL including effort estimates. “IRDL is a dialect to represent dialects. The aim is to offer a self-contained and portable way to define dialects, without TableGen. This thread tracks the roadmap towards a first completeness goal.” Prabhu Rajasekaran started an RFC discussion on preserving call graph information from Clang all the way through to a .callgraph ELF section . Shubham Rastyogi proposes the addition of an llvm.dbg.coroframe_entry intrinsic , motivated by the need to describe variables in a coroutine frame prior to the CoroSplitter pass running. LLVM commits LLVM’s lit test suite now uses the lit internal shell by default. 73b24d2 . lit --update-tests learned to update tests using diff . 6c3f18e . The LSP (Language Server Protocol) support library moved into LLVM from MLIR. a3a2599 . SimplifyCFG will replace switch with simpler instructions in more cases than before. 89d86b6 . Profile guided outlining support was added to the MachineOutliner. ada9da7 . AArch64FrameLowering::emitPrologue was refactored as it had grown rather large. 106eb46 . As preparation for the ptradd migration, GEPs with multiple non-zero offsets are now split into two GEPs. a301e1a . Some of the regular administrative tasks in the LLVM project were documented. c883b67 . CHERI capability types were added to MachineValueType. 0f13cae7 .
The LDBG_OS() macro was added, which takes a callback function that will be called with raw_ostream . 2832717 . Data layout string computation was moved to TargetParser . f3efbce . Clang commits Atomic compare-and-swap is now supported in ClangIR. 990fe80 . Clang’s Thread Safety Analysis gained a basic alias analysis for capabilities. b4c98fc . Clang’s lifetime safety analysis was extended to support C++ types annotated with gsl::Pointer . 94213a4 . Other project commits The rationale for the MLIR IRDL dialect was documented. a401f46 . BOLT’s previously x86-only pass for inlining memcpy was extended to support AArch64. 244588b . Scudo grew the beginnings of a tracing framework. 7628abd . Tiling was enabled for Flang’s OpenMP. d452e67 . libcxx now has a new compare-benchmarks script which can be used for printing a summary comparison of any LNT-compatible data. a6af641 . The build of OpenMP device runtimes is now handled much like libc, libc++, compiler-rt etc. be6f110 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34
https://llvmweekly.org/issue/619 | LLVM Weekly - #619, November 10th 2025 LLVM Weekly - #619, November 10th 2025 Welcome to the six hundred and nineteenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Arm engineers blogged about their contributions to LLVM 21 . The next Cambridge (UK) LLVM social will take place on November 27th . The next LLVM Meetup Darmstadt will take place on Nov 26th . According to the LLVM Calendar in the coming week there will be the following. Office hours with the following hosts: Aaron Ballman, Johannes Doerfert. Online sync-ups on the following topics: Flang, MLIR C/C++ frontend, alias analysis, modules, GlobalISel, lifetime safety, LLVM/Offload, SPIR-V, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Peter Rong posted an RFC on providing open evaluation and better onboarding for MLGO (Machine Learning-Guided Optimization) , reflecting on the various challenges people face when trying to use or evaluate things like MLGO for register allocation. Slides are now available from the GPU/Offloading LLVM workshop . Additionally, notes have been posted from the Embedded Toolchain Workshop . Krzysztof Parzyszek posted a Flang RFC that proposes breaking up compound OpenMP directives in the AST instead of doing it during lowering to MLIR . Matthias Springer suggests adding to MLIR the ability to execute operations with unsupported floating point types by calling into APFloat .
The primary motivation is for testing, and the proposal has had a lot of support so far. Marc Auberer started a discussion about adding new end-to-end LLVM IR test cases . Farzon Lotfi made a Clang RFC proposal related to supporting the swizzle-style suffix access syntax for HLSL matrix types . Naveen Seth Hanig proposed breaking clangFrontend’s dependency on clangDriver by adding a clangOptions library . Fangrui Song shared thoughts on a compact section header table for ELF . Jan Svoboda started an RFC discussion on having Clang use a file system sandbox by default for developers’ builds , which would enforce the use of vfs::FileSystem and disallow direct llvm::sys::fs calls. Luke Lau notes that nightly RISC-V performance numbers are now being reported via LNT for the SiFive P550 . LLVM 21.1.5 was released . Tarun Prabhu suggests that Flang should accept and ignore gfortran-specific options , printing a warning. LLVM commits The DFAJumpThreading pass was enabled by default. This is reported to give a 13% performance improvement on CoreMark on X86. 0ba7bfc . The AArch64 backend got better at coalescing stack adjustments in the prologue or epilogue in the presence of SVE. 33609bd . Work on LLVM’s content addressable storage (CAS) utilities continued with the addition of UnifiedOnDiskCache and OnDiskCAS as well as an llvm-cas command-line tool for debugging. 6747ea0 , ebb61a5 . llvm-dwarfdump learned the --child-tags option to filter by the given DWARF tags. f8656ed . The nocreateundeforpoison IR attribute was introduced. f037f41 . The llvm.reloc.none intrinsic was added, and will emit a no-op relocation against a given operand symbol. 5f08fb4 . llc now has a --save-stats option which works similarly to Clang’s option of the same name. 96a5289 . Clang commits C intrinsics were added for the SiFive RISC-V XSfvfexp and XSfvfexpa extensions. 4cb8f97 . A CMake example for linking against libclang was added to the documentation. 71022d1 .
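For a sense of what DFAJumpThreading targets: loops that switch over a state variable whose successor state is a compile-time constant in each case, as in this small invented matcher. The pass threads those known transitions into direct branches, removing the repeated switch dispatch:

```cpp
#include <cassert>

// Counts occurrences of "ab" via a tiny finite state machine.  Each switch
// case assigns the next state as a constant -- the shape DFAJumpThreading
// recognises and threads into direct branches.
int count_ab(const char *s) {
    int state = 0;   // 0 = start, 1 = just saw 'a'
    int count = 0;
    for (; *s; ++s) {
        switch (state) {
        case 0:
            if (*s == 'a') state = 1;   // constant next state
            break;
        case 1:
            if (*s == 'b') ++count;
            state = (*s == 'a') ? 1 : 0; // handles "aab" correctly
            break;
        }
    }
    return count;
}
```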
counted_by is now allowed on void* as a GNU extension. f29955a . Other project commits compiler-rt now has a build option for execute-only code generation on AArch64. 9d18e92 . printf error handling was added to LLVM’s libc. 9e2f73f . LLDB gained SBFrameList for iterating over stack frames lazily. d584d00 . The XeGPUOptimizeBlockLoads pass was added to MLIR. 9703bda . The new AccImplicitData MLIR pass will automatically generate data clause operations (e.g. copyin, copyout) for variables used within OpenACC compute constructs that do not already have explicit data clauses. 28c6ed5 . The MLIR pygments lexer was improved. 7ac6a95 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/620 | LLVM Weekly - #620, November 17th 2025 LLVM Weekly - #620, November 17th 2025 Welcome to the six hundred and twentieth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Save the date for the 2026 EuroLLVM Developers' Meeting, taking place April 13th-15th in Dublin . The next LLVM meetup in Berlin will take place on 20th November . According to the LLVM Calendar in the coming week there will be the following. Office hours with the following hosts: Johannes Doerfert, Aaron Ballman. Online sync-ups on the following topics: ClangIR upstreaming, pointer authentication, vectorizer improvements, Clang C/C++ language working group, Flang, RISC-V, LLVM libc, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Justin Stitt started an RFC discussion on supporting “strong typedefs” in Clang , motivated by the desire to reduce type confusion bugs in C code. Reid Kleckner provided a new summary on discussions about 64-bit source locations in Clang . Vladislav Dzhidzhoev provided a detailed update on work to improve DwarfDebug support for imported entities. In response to a user question, Markus Böck provided some pointers to examples of using LLVM’s GC facilities . Aaron Ballman provided an update on expectations for Clang’s new constexpr engine . The change is that if the older ExprConstant.cpp is being modified, a matching patch should also be made to the new constant expression engine.
Jeff Bailey queried locking in LLVM’s libc getenv/setenv functions , which generated a bit of discussion about what guarantees other libc implementations provide. Notes were shared from the 2025 runtimes workshop . Maksim Levental posted an RFC proposal for regularly bumping the minimum Python version required by MLIR , coinciding with the end of life of that Python version from upstream. This policy is MLIR specific rather than LLVM wide. The proposal has strong support so far. LLVM commits The straight-line strength reduction pass was redesigned, providing infrastructure for later adding partial strength reduction support. f67409c . JITLink gained initial SystemZ support. 8218055 . The register coalescer “terminal rule” was enabled by default for multiple targets. e95f6fa , 793ab6a , 2aa629d , and more. DFAJumpThreading was disabled by default again due to detected miscompiles. 7e04336 . Initial codegen support was added for the RISC-V ‘P’ (packed SIMD) instruction set extension. Currently just for a small set of instructions. dfdc69b , 6b16b31 . The ‘modular-format’ attribute was introduced. This can be used to e.g. keep floating point support out of printf if it can be proven to be unused. c9ff2df . lit’s options for controlling output were revamped with a range of new options added, and existing -q , -s , -v , -a flags defined as aliases of appropriate combinations of these new finer grained options. dbf77e4 . Following the addition of --save-stats to llc , opt now supports the same flag. 35ffe10 . A new target feature was introduced for BPF to allow misaligned memory accesses. fb2563d . Assembler support was added for the AArch64 Permission Overlay Extension 2. 40a9e34 . Clang commits A generic byte swap builtin, __builtin_bswapg , was added, supporting all integer types. f210fc1 . clangDriver options-related code was factored into a new clangOptions library. 9a783b6 . ClangIR AddressSpace conversions support was upstreamed. 8f90716 .
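__builtin_bswapg generalises the fixed-width __builtin_bswap16/32/64 builtins to arbitrary integer types. A portable sketch of the same byte-reversal semantics (the template below is illustrative, not the builtin's implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Reverse the bytes of any trivially-copyable integer value, mimicking
// what a generic byte-swap builtin computes for an N-byte integer.
template <typename T>
T bswap_generic(T v) {
    unsigned char bytes[sizeof(T)];
    std::memcpy(bytes, &v, sizeof(T));      // grab the object representation
    for (std::size_t i = 0; i < sizeof(T) / 2; ++i) {
        unsigned char tmp = bytes[i];
        bytes[i] = bytes[sizeof(T) - 1 - i];
        bytes[sizeof(T) - 1 - i] = tmp;     // swap mirrored byte positions
    }
    std::memcpy(&v, bytes, sizeof(T));
    return v;
}
```

The result is endianness-independent: swapping the bytes of 0x12345678 always yields 0x78563412.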
Other project commits std::optional<T&> was implemented in libcxx. 389a23c . Flang gained parser support for the prefetch directive. cf1f871 . The GTest version used by the runtimes build no longer has a dependency on LLVMSupport, avoiding some bootstrapping issues. 0957656 . An SVE implementation of strlen was added to LLVM’s libc. 8751f26 . The arith-to-apfloat pass was added to MLIR. It will lower floating point arithmetic operations to calls into a runtime library. 7a53d33 . Polly now has its own pipeline manager rather than relying on LLVM’s pass manager. 7a0f7db . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34
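On std::optional<T&>: it directly models "maybe a reference". The runnable sketch below (names invented) uses the pre-C++26 workaround it replaces, std::optional<std::reference_wrapper<T>>; optional<T&> removes the std::ref/.get() noise and always rebinds rather than assigns through on assignment:

```cpp
#include <cassert>
#include <functional>
#include <optional>

// Find the first even element and hand back a possibly-absent reference
// to it.  With std::optional<int&> the return type and body get simpler:
// `return x;` / `return std::nullopt;`.
std::optional<std::reference_wrapper<int>> first_even(int (&arr)[3]) {
    for (int &x : arr)
        if (x % 2 == 0)
            return std::ref(x);    // aliases the element, no copy
    return std::nullopt;
}
```

A caller can write through the returned reference, e.g. `r->get() = 6;` mutates the underlying array element.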
https://llvmweekly.org/issue/617 | LLVM Weekly - #617, October 27th 2025 LLVM Weekly - #617, October 27th 2025 Welcome to the six hundred and seventeenth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Matheus Izvekov blogged about recent changes to make the type representation in the Clang AST smaller . Fangrui Song wrote a blog post about stack walking mechanisms, including the recently developed SFrame format . The call for papers is now out for the ACM SIGPLAN 2026 International Conference on Compiler Construction (CC 2026) . According to the LLVM Calendar in the coming week there will be the following. Special note for this week: with the LLVM dev meeting going on some of these may be canceled but not yet marked as such on the calendar. Additionally, the European countries have exited daylight savings time over the weekend but the US won’t do so until next weekend, so times may be different to usual. Office hours with the following hosts: Johannes Doerfert. Online sync-ups on the following topics: Flang, modules, libc++, LLVM/Offload, SPIR-V, OpenMP for Flang. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Jan Korous posted an RFC on a scalable static analysis framework . This work from Apple comes from a prototype effort to implement a source code rewriting tool that uses static analysis methods to apply security hardening across C++ codebases. 
Lucile Rose Nihlen announced the availability of a dashboard to monitor code review and contribution rates over time . The dashboard shows statistics such as LLVM commits per day, what percentage of these commits were landed without pre-commit review, etc. whitequark has an update on building LLVM for WebAssembly . A package is now available on NPM. Nikita Popov proposed changing the indirectbr/blockaddress representation so that rather than having blockaddress refer to a basic block, it would instead refer to a specific indirectbr successor index. LLVM 21.1.4 was released . Rahman Lavaee started an RFC discussion on adding support for inserting code prefetch instructions as a post-link optimisation step . Petter Berntsson and the LLVM qualification group are collecting community input on the qualification of LLVM tools and libraries . LLVM commits A radix tree was added to LLVM’s data structure library. 5fda2a5 . masked.{load,store,gather,scatter} now have alignment as an attribute on the pointer/vector pointers argument rather than as a separate immarg. 573ca36 . The Hexagon backend gained a new optimisation pass for intermediate conversion instructions while performing vector floating point operations. e8b255d . Initial support for a monotonicity check was added to DependenceAnalysis. ab789bef . The unsafe-fp-math attribute was removed now that it is no longer used by any backends. d11b7bd , 76f15ea , 3656f6f2 , 4f020c4 . Additional location tracking was added to the LLVM IR parser, in preparation for an LLVM IR LSP server. 18d4ba5 . Definitions were added for the Armv9.7-A architecture. 7ac2900 , f28224b , 66e8270 , and more. Clang commits Clang started to emit llvm.tbaa.errno metadata. efcda54 . ClangIR try/catch statement support was upstreamed. d019a02 . Other project commits libunwind was hardened for platforms that support pointer authentication. e6a1aff . Documentation was added for the MLIR no-rollback conversion driver. 565e9fa .
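For readers unfamiliar with the data structure behind the first LLVM commit above: a radix tree stores keys along shared prefix paths, so a lookup walks the key once rather than comparing against every stored key. The sketch below is a deliberately simplified, uncompressed trie in plain C++ to convey the idea; it is not LLVM's ADT interface, and all names are illustrative.

```cpp
#include <map>
#include <string>

// Toy prefix tree: keys that share a prefix share a path from the root.
// A real radix tree compresses single-child chains into one edge; this
// uncompressed version keeps the code short while showing the lookup idea.
struct RadixNode {
    std::map<char, RadixNode> children;
    bool is_key = false;  // true if a stored key ends at this node
};

struct RadixTree {
    RadixNode root;

    void insert(const std::string& key) {
        RadixNode* n = &root;
        for (char c : key) n = &n->children[c];  // create path as needed
        n->is_key = true;
    }

    bool contains(const std::string& key) const {
        const RadixNode* n = &root;
        for (char c : key) {
            auto it = n->children.find(c);
            if (it == n->children.end()) return false;  // path breaks: absent
            n = &it->second;
        }
        return n->is_key;  // reached end of key; was it actually inserted?
    }
};
```

Because lookups follow prefixes, such a tree is a natural fit for string sets with heavy common prefixes, e.g. intrinsic or pass names.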
Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/608 | LLVM Weekly - #608, August 25th 2025 Welcome to the six hundred and eighth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Fangrui Song has been busy blogging again, with recent posts on understanding alignment and improving sections and symbols in the LLVM integrated assembler . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Kristof Beyls, Johannes Doerfert, Amara Emerson. Online sync-ups on the following topics: ClangIR upstreaming, pointer authentication, OpenMP, Flang, RISC-V, LLVM libc, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Renato Golin started an MLIR RFC thread on open questions related to linalg forms . Simon Tatham asks if there is interest in the upstreaming of optimised routines for FP arithmetic on hardware without FP support . Mikołaj Piróg proposed to make -flax-vector-conversions=none the default , which is closer to GCC’s behaviour. After discussion on a Google Doc, a PR is now up with a proposed change to the maintainer policy document . Florian Mayer shared a proposal for adding an abseil-unchecked-statusor-use check . Louis Dionne announced a libc++ ABI break was discovered in LLVM 20 and will be fixed in LLVM 21 (necessitating another break).
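The proposed abseil-unchecked-statusor-use check targets the classic bug pattern of reading a StatusOr's value without first checking ok(). A sketch of the unchecked versus guarded patterns, using a stand-in class rather than Abseil's real absl::StatusOr (all names below are hypothetical, for illustration only):

```cpp
#include <string>
#include <utility>

// Stand-in for absl::StatusOr<T>, only to illustrate the pattern the
// proposed clang-tidy check would flag. This is NOT Abseil's real class.
template <typename T>
class FakeStatusOr {
public:
    explicit FakeStatusOr(T value) : ok_(true), value_(std::move(value)) {}
    static FakeStatusOr Error() { return FakeStatusOr(); }
    bool ok() const { return ok_; }
    // Unchecked access: reading this when !ok() is the bug being flagged.
    const T& value() const { return value_; }
private:
    FakeStatusOr() : ok_(false), value_{} {}
    bool ok_;
    T value_;
};

FakeStatusOr<int> parse_port(const std::string& s) {
    if (s.empty()) return FakeStatusOr<int>::Error();
    return FakeStatusOr<int>(std::stoi(s));
}

// The check would flag code like `parse_port(s).value()` with no ok()
// guard; the safe pattern checks ok() before touching value():
int port_or_default(const std::string& s, int fallback) {
    auto r = parse_port(s);
    return r.ok() ? r.value() : fallback;  // checked before use
}
```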
“The ABI break impacts several container types when used with allocator types, comparator types or hasher types that have specific properties” Peter Collingbourne started an RFC thread on enhancing function alignment attributes in order to support a notion of preferred alignment. Respondents so far are supportive of adding the functionality, but suggest not overloading the meaning of the existing alignment attribute (used for minimum alignment) and adding a new one instead. Luke Lau proposed removing codegen support for trivial vector prediction intrinsics in the RISC-V backend , as these have no advantage over the equivalent regular instructions and add burden in terms of additional test cases and code paths. The RFC has support so far. LLVM commits llvm-lit now has an --update-tests option which will attempt to call the appropriate update-test-checks tool for a failing test. e1ff432 . -inline-all-viable-calls can be used to inline all viable calls even if they exceed the inlining threshold. 58de8f2 . MC-layer support was added for the SpacemiT vector dot product vendor specific RISC-V extension. 6842cc5 . DemandedBits now handles non-constant shift operands. c2e7fad4 . The recently added OrigTy availability in call lowering was used to remove all custom logic in the Mips backend for retaining information about the original types. b2fae5b . The llvm-offload-wrapper tool was added. 4c9b7ff . llvm-objcopy can now be used for DXContainer object files. 15babba . Upstreaming of the LLVMCAS (content addressable storage) library continues with the addition of the ActionCache. deab049 . --relative-paths can be used with lit to have it print test case names using paths relative to the current working directory. 5928619 . Clang commits A huge update was made to the documentation on pointer authentication. 62d2a8e . The SpaceInEmptyBraces option was added to clang-format. 6cfedea . 
As part of the lifetime safety analysis work, a basic use-after-free diagnostic was implemented. 673750f . Clang’s GCC detection logic now takes into account the presence of libstdc++ headers. 50a3368 . Upstreaming of ClangIR continues with support for atomic load/store operations (as well as various other features in separate commits). 318b0dd . The misc-override-with-different-visibility clang-tidy check was added, which will flag virtual function overrides with different visibility from the function in the base class. a0f325b . Other project commits A RemarkEngine was added to MLIR, providing optimisation remark support. 3d41197 . BOLT now has a --dump-dot-func command-line option to allow dumping CFGs for the specified functions. fda24db . A vector strlen implementation was added to LLVM libc for x86_64 and AArch64. 3179200 . MLIR’s WasmSSA dialect importer was extended. 95fbc18 . The ompTest unit testing framework was added to LLVM’s OpenMP implementation. 35f01ce . A new top-level ORC runtime project was introduced, intended to replace the existing runtime in compiler-rt/lib/orc. ee7a6a4 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/606 | LLVM Weekly - #606, August 11th 2025 Welcome to the six hundred and sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events Rafael Andres Herrera Guaitero wrote on the LLVM blog about CARTS: Enabling Event-Driven Task and Data Block Compilation for Distributed HPC . The next Portland area LLVM social will take place on August 14th . According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Aaron Ballman, Alexey Bader, Kristof Beyls, Johannes Doerfert. Online sync-ups on the following topics: MLIR C/C++ frontend working group, ClangIR upstreaming, alias analysis, pointer authentication, 64-bit source locations, OpenMP, Flang, BOLT, RISC-V, embedded toolchains, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Kelly Kaoudis proposed (along with a number of colleagues) the addition of a Clang “constant-time selection” builtin for use by cryptographers. Respondents so far are concerned that to provide the necessary guarantees a new type would need to be introduced, e.g. a “constant-time value”. Mircea Trofin provided an update on profile information propagation unit testing . James Henderson is seeking feedback on the acceptability of supporting C++/CLI in clang-format (referring to Microsoft’s C++ language extension). There are no concerns so far.
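To illustrate what the proposed "constant-time selection" builtin is meant to guarantee: choosing between two values without a data-dependent branch. The mask-based sketch below shows the usual source-level idiom (the ct_select name is hypothetical); the point of a builtin would be that the compiler must preserve the branch-free property, whereas with a plain ternary, or even this idiom, the optimiser is free to reintroduce a branch.

```cpp
#include <cstdint>

// Branchless selection idiom used in cryptographic code. A plain ternary
// `pick_a ? a : b` may be compiled to a branch whose timing leaks the
// condition; expanding the condition into an all-ones/all-zeros mask and
// combining with bitwise ops avoids a data-dependent branch in the source.
// Note: today's compilers give no guarantee they won't transform this back
// into a branch, which is exactly what the proposed builtin would forbid.
uint32_t ct_select(bool pick_a, uint32_t a, uint32_t b) {
    // pick_a == true  -> mask = 0xFFFFFFFF; pick_a == false -> mask = 0.
    uint32_t mask = ~(static_cast<uint32_t>(pick_a) - 1u);
    return (mask & a) | (~mask & b);
}
```

This also hints at why respondents suggest a dedicated type: the guarantee must survive every optimisation pass, not just the source-level expression.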
Meeting notes from the MLIR Tensor Compiler Design Group were shared . I don’t currently consistently link to all of the growing number of meeting notes threads from different LLVM sub groups. I wondered if it would make sense to consistently use a certain tag for such posts so that in LLVM Weekly I can either link to that tag, or use it as an aid to generate a listing of groups that shared new meeting notes in the last week. Luke Hutton started an MLIR RFC thread on target-dependent optimisation in TOSA . Benson Chu is seeking feedback on an RFC to introduce compiler support for “skip fault mitigation” . Dmitry Sidorov posted an RFC on representing float8 in LLVM IR motivated by SPIR-V recently adding support for the E5M2 and E4M3 formats. The next MLIR open meeting will discuss MLIR properties . Matthias Springer provided an update that the new MLIR one-shot dialect conversion driver is up for review and summarised the API changes. Aiden Grossman proposed removing support for %T from llvm-lit . Peter Collingbourne posted an RFC on supporting pointer field protection in libc++ . “eeochoalo” would like to change byte-alignment to bit-alignment for the MLIR memref and vector dialects . Jaden Angella posted an MLIR RFC on adding EmitC support for MLGO , to generate C++ from TOSA. LLVM commits Tail folding is now enabled by default for RISC-V vector code generation. 7074471 . The ptrtoaddr instruction was introduced. Unlike ptrtoint , it doesn’t capture provenance and it only extracts the low index-width bits of the pointer (with the latter only making a difference on architectures like CHERI). 3a4b351 . spirv-sim was removed. It was intended to be a tool to help with SPIR-V testing, but had been made obsolete now that end-to-end testing is in place via the offload-test-suite project. d64371b . Finer grained control for the sinking of compares is now possible through the hasMultipleConditionRegisters hook. 94d374a .
LLVM can now be built with -fvisibility=hidden on GCC on Linux. 8c9feb7 . Basic RISC-V mapping symbol support was implemented in llvm-objdump. 4e11f89 . The size argument was removed from lifetime intrinsics. c23b4fb . Clang commits clangd now has a doxygen parser. 2c4b876 . Work started on support for variable template and concept template parameters. 28ed57e . The nested name specifier AST representation was improved, leading to measurable compile time wins. 91cdd35 . For RISC-V, -march=unset can be used to cancel and ignore a previous -march value. 92a966b . Other project commits The WasmSSA MLIR dialect was added. a534896 . The FLANG_DEFAULT_LINKER CMake option was introduced. dea50a1 . LLVM libc gained a range of new bf16 math functions. 246f923 , 1ffb995 , 15a705d , a4ff76e . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/603 | LLVM Weekly - #603, July 21st 2025 Welcome to the six hundred and third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org , or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org , or Bluesky: @llvmweekly.org / @asbradbury.org . News and articles from around the web and events The first LLVM Seattle Meetup of the year will take place on August 2nd featuring talks “High-level overview of MSVC’s Address Sanitizer” by David Justo and “Beginner-friendly introduction to MLIR” by Shubh Pachchigar. According to the LLVM Calendar in the coming week there will be the following: Office hours with the following hosts: Johannes Doerfert, Amara Emerson. Online sync-ups on the following topics: Flang, modules, lifetime safety, LLVM/Offload, SPIR-V, OpenMP for Flang, HLSL, memory safety working group. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums The LLVM 21.x release branch was created and LLVM 21.1.0-rc1 was released . The main branch has been bumped to 22.0.0git. Ferdinand Lemaire noted that the first PR for upstreaming the WasmSSA MLIR dialect has been opened . Marco Elver kicked off an RFC discussion on allocator partitioning hints . This could be used to aid a hardened memory allocator, taking advantage of additional semantic information about the managed allocations. On behalf of the LLVM Project Council, Alex Zinenko provided an update on the MLIR Project Maintainers nominations .
The proposed category maintainers are moving forward, but the proposal of “Lead MLIR Project Maintainers” is awaiting further clarity on the scope of responsibilities. Nicholas Junge shared a public service announcement on changes to MLIR Python type casters . “Leilongjie2000” started an RFC discussion on missed LoopStrengthReduce opportunities for outer loops containing inner loops , including the SPEC CPU 2017 648.exchange2_s benchmark. Vlad Serebrennikov provided data on the volume of historical bug reports related to the source locations limit in Clang as input to the discussion on revisiting 64-bit source locations. “isuckatcs” shared initial results from a Clang summary-based analysis prototype . LLVM commits -opt-disable can now be used for disabling a pass given by name. 81eb7de . Documentation on what is considered a security issue was improved. 145b6cd . The llvm-ir2vec tool now has documentation. f295617 . Structures and constants for the “ SFrame ” unwind info format were committed. ee9b84f . The debuginfo “key instructions” project gained documentation describing the core ideas and some implementation details. acffe83 . Native Client support was removed from the LLVM tree. 0d2e11f . Bundle alignment mode was removed from LLVM’s MC layer, following the Native Client removal. 28e1473 . Debug intrinsic verifier code was deleted now that debug intrinsics are no longer produced and are always auto-upgraded. b470ac4 . Minimal big-endian RISC-V support was added to llvm-objcopy and LLVM’s ELF file support code. 742147b . The debugify script gained an acceptance-test mode that produces YAML output rather than an HTML report. This is intended to be friendly to integration into CI. b7c14b6 . Clang commits As part of the lifetime safety work, dataflow analysis for loan propagation was implemented and a script added for performance benchmarking. It was later made more generic. f25fc5f , 7615503 , 752e31c . Documentation was added on C++ type-aware allocators. 7cde974 .
Pointer authentication can now be used in Objective-C. 451a9ce . Integrated Distributed ThinLTO (DTLTO) flags were added to Clang. 5004c59 . Clang learned to set the dead_on_return attribute for arguments passed indirectly. 9e0c06d . Infrastructure was added to allow clang-repl to pretty print types. 9bf7d04 . Other project commits compiler-rt now has runtime functions for emulated pointer authentication codes. These are meant to be used in test environments where hardware support isn’t available. de31584 . In compiler-rt’s lit setup, config.host_os was renamed config.target_os in order to reduce confusion. 3fa07ed . Initialisation and destruction in the Flang runtime were made faster. 2e53a68 . The -fdiscard-value-names build flag was added to libclc, reducing the bitcode size of nvptx64-nvidiacl.bc from 10.6MB to 5.2MB. 9d78eb5 . libcxx now has an implementation of std::ranges::zip_transform_view . d344c383 . LLD gained support for thunks for Hexagon. b42f96b . LLDB’s disassembly of unknown instructions was improved, especially for the case where the size of the instruction is known but it is unrecognised (as may happen for RISC-V with an unrecognised vendor extension). eb6da94 . LLDB documented how to run its tests using qemu-user. 9f364fe . An MLIR “pattern catalog” generator was added. This could be used to make a website that allows looking up patterns from an operation name. 7caf12d . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://hackmd.io/blog/2025/12/31/2025-recap?utm_source=blog&utm_medium=featured-article | 2025 in Review: New Features, Big Wins, and What Lies Ahead - HackMD Blog Dec 31, 2025 By Chaseton Collins As 2025 comes to a close, we’re taking a moment to reflect on a year that was, quite honestly, incredible. It’s been a journey of real growth and meaningful connection, made even better by welcoming more than 350,000 new teams and innovators into our community. We want to be transparent with you: everything we do is centered on the idea that we’re building this platform specifically for you. Your work, your shared ideas, and the way you use Markdown to bring projects to life are the real reasons we show up every day to ensure your collaboration remains effortless. Looking ahead to 2026, we’re excited to keep streamlining your documentation process and helping you say goodbye to version control headaches for good. We’ve been looking at the features and stats that defined our year together, and we can’t wait to share those highlights, along with a sneak peek at the vision we’re crafting for the months to come. Our goal remains simple: to keep providing an inviting, helpful space where you can connect with your team and innovate together. We’re just people building for people, and we’re so glad you’re here for what’s next. 💜 Everything we shipped in 2025 2025 has been an incredibly busy and rewarding ride for our team. We’ve spent the year focused on making your real-time collaboration feel even more effortless, rolling out several of the highly-requested features you told us were essential to your workflow. We’re proud to have shipped a suite of updates designed to make your real-time collaboration feel more effortless and synchronized.
We prioritized your most frequent requests to ensure our editor remains the most flexible space for developers and teams to build together. This year, we introduced Folders and Tag Management to streamline how you organize your knowledge, alongside a Profile Reboot to give your personal and team presence a modern, tech-savvy edge. We also focused on the fine details of the editing experience, launching Paragraph Bookmarks , Paragraph Citations , and Guided Comments to help you navigate and reference complex documentation with precision. To further enhance how you communicate with your community, we rolled out interactive features like Emoji Replies and Link Previews , making every document feel more alive and connected. We’ve also simplified the way you manage and distribute your work with the introduction of Version Links and Shareable Links . Each of these features was built with our “honesty first” philosophy in mind, ensuring that our platform remains a lightweight, trusted place for you to accelerate innovation and bring your projects to life. We’ve made exciting updates to the text editor and to the overall product experience, empowering you to collaborate and innovate even more effectively. We are committed to building a platform that is both highly functional and a way for like-minded people to connect. Partnering to amplify collaboration and documentation In 2025 we partnered with both open-source industry leaders such as OpenJS and Web3 startups such as Updraft and Blockfuse Labs! Beyond the code we write, we know that building a truly connected community requires more than just a great editor; it requires deep collaboration with those who share our values. This is why our partnerships with OpenJS, Updraft, and Blockfuse Labs are so vital to our mission of empowering developers to share ideas.
By working alongside these teams, we bridge the gap between open-source standards and decentralized technology, ensuring that HackMD remains a trusted place for you to accelerate innovation. We believe in being authentically ourselves and building for you, and these relationships allow us to bring even more of that collective expertise to your daily workflow. Connecting with the community Aside from updates and partnerships, the HackMD team took some time to connect with the community, dive deeper into our mission, and meet up with the teams that build on our platform in real life. We made a deliberate effort to meet our community where they already build and learn. One highlight was connecting with Tech Stack Nation , a live study group and developer community actively using HackMD as part of their learning and documentation workflows. Sitting down with their team gave us a firsthand look at how HackMD supports real collaboration in educational and technical environments, from shared notes to structured learning resources. These conversations reinforced why staying close to our users matters. By listening, learning, and showing up, we strengthened our connection to the community and gained valuable insight that continues to shape how HackMD evolves for the people building on it every day. HackMD spent time learning from communities like The Commons Economy , who are actively exploring better ways to build and share knowledge together. Their work challenges the idea that collaboration should live inside closed, static documents. Through conversations and shared examples, we saw how HackMD supports their shift away from Google Docs toward workflows built on openness, version history, and collective contribution. Seeing how The Commons Economy uses HackMD reinforced something we believe deeply: collaboration works best when everyone can participate , understand what changed, and build on each other’s work. 
That mindset shaped how we continued to improve HackMD throughout the year. The team also made time to step away from our screens and spend time with the developer community in person. One highlight was attending JSConf 2025 , where we had the chance to connect directly with the people building, teaching, and experimenting with JavaScript every day. Being there gave us a clearer picture of how documentation, collaboration, and shared knowledge fit into real development workflows, not just in theory, but in practice. Spending time at JSConf reminded us that collaboration does not start or end with a finished document. HackMD often plays a role much earlier, when ideas are still forming, notes are messy, and conversations are evolving, and talking face-to-face with developers reinforced why we focus so much on creating tools that support thinking together, iteration, and openness, long before anything is considered complete. No longer a tool but a growing ecosystem We say this every year, but it keeps becoming more true. HackMD continues to grow because of the people using it. In 2025, builders, educators, and teams around the world used HackMD to create, share, and collaborate in ways that went far beyond writing notes. What started as a simple editor has grown into an ecosystem shaped by real workflows, real communities, and real collaboration across borders. Everything we built this year was informed by how you use HackMD every day. Here is a snapshot of what that growth looked like in 2025, made possible by the teams and individuals building together on HackMD. But the numbers only tell part of the story. Behind them are teams turning documentation into momentum. One of those teams is Taipei Tech Racing . TTR is just one example of how HackMD empowers communities, but they are far from alone. Across software development, web3, research, education, startups, and non-profits , teams rely on HackMD to collaborate openly and move ideas forward together.
Whether it is early brainstorming, shared documentation, or long-term knowledge building, HackMD provides the space for people to come together, spark creativity, and turn ideas into action. That shared momentum is what continues to shape everything we build next. Our vision for 2026 As we head into 2026, our focus is clear. We want to bring collaboration and knowledge closer together. Over the years, HackMD has helped teams work side by side, but we know that great ideas often live beyond a single workspace or document. They live in communities, shared learnings, and the connections between people doing similar work. In the year ahead, we are exploring new ways to help that knowledge travel further. We want it to be easier to discover how others are working, learn from real examples, and build on ideas that already exist. This means creating more intentional paths between collaboration and exploration, so knowledge does not stay siloed once a project is finished. That vision is what guides our next chapter. With upcoming community-focused features, including new ways to explore and surface shared work, HackMD is evolving into a place where collaboration leads naturally into learning and discovery. In 2026, we are focused on bridging the gap between creating together and learning from one another, so ideas can move further, faster, and with more people involved. We are excited to build this next chapter alongside you.
| 2026-01-13T09:30:34 |
https://www.mongodb.com/community/forums/c/community/16 | Latest About the Community topics - MongoDB Community Hub MongoDB Community Hub About the Community $weeklyUpdate Find information about podcasts, Twitch streams, webinars, virtual sessions, and more from the MongoDB Developer Relations team. The Treehouse A place for connecting with friends and engaging in casual, off-topic discussion. Please remember that the Code of Conduct still applies. Getting Started Guides and tips to improve your experience on the forums. Learn more about trust levels, badges, formatting tips, tracking and using topic tags, and having great discussions. Community Announcements Announcements, news, activities, opportunities to engage and interact, and other relevant information from the MongoDB Community team. Forums Site Feedback Discussion about this site, its organization, how it works, and how we can improve it. Topic Replies Views Activity Moderation and Code of Conduct Enforcement About the Community Reporting If you witness or become aware of behavior that violates our Code of Conduct, please report it promptly to the MongoDB Community Team at moderators@mongodb.com. If you have reported the incident or behavior b… 0 8353 January 13, 2020 About the About the Community category About the Community 0 6993 January 4, 2020 [Atlas] Can I reset all GCP Cloud IDs and access credentials? Forums Site Feedback atlas 0 10 January 2, 2026 Initial sync failing Forums Site Feedback performance , replication , monitoring 4 603 November 19, 2025 How can I save a MongoDB Compass connection without storing the password?
Forums Site Feedback compass 0 173 November 6, 2025 GameDev Episode 1: Designing a Strategy for Building a Game with MongoDB and Unity $weeklyUpdate unity 8 7202 October 13, 2025 Community Champions + Creators: January 2025 $weeklyUpdate transactions , java , sharding , weekly-update , laravel 1 394 October 4, 2025 Unable to connect to MongoDB from VS Code Forums Site Feedback 1 344 August 17, 2025 Welcome our new 2025 Community Champions! Community Announcements 1 194 September 14, 2025 MongoDB.local Events The Treehouse vector-search 1 236 December 28, 2025 The Journey of #100DaysOfCode Challenge The Treehouse 100daysofcode , lebanon-mug 2 186 November 15, 2025 MongoDB Connection Failed (Timeout) The Treehouse connecting , queries , node-js , mongoose-odm 6 723 December 15, 2025 Enhance Your MongoDB Workflow in IntelliJ IDEA Ultimate – Public Preview now available! Community Announcements aggregation , schema-validation , java , atlas 1 260 August 16, 2025 Question about MongoDB Internship / Career Contact for APAC The Treehouse 1 130 December 1, 2025 The Journey of #100DaysOfCode(@ZahraaElhek) The Treehouse 100daysofcode 5 217 November 17, 2025 What are the advantages of using MongoDB over traditional relational databases like MySQL or PostgreSQL? The Treehouse queries 2 165 November 16, 2025 How does MongoDB handle transactions and multi-document ACID compliance? The Treehouse queries , transactions 2 219 November 17, 2025 Export excel or csv Through MongoDB Compass UI is free Forums Site Feedback compass 1 330 May 12, 2025 How does MongoDB handle concurrency and locking? The Treehouse queries 1 134 November 3, 2025 MongoDB Authorization Error: Unauthorized to Execute find on Collection The Treehouse node-js 1 166 October 26, 2025 #100DaysOfCodeChallenge The Treehouse 100daysofcode 81 2009 July 12, 2025 100 Days of Code Challenge! The Treehouse 101 2302 July 12, 2025 How to use left join in mongo? 
Forums Site Feedback aggregation , node-js , mongoose-odm 0 264 March 31, 2025 Is this a duplicate? Should the duplicate be deleted? The Treehouse 3 206 September 20, 2025 MongoNetworkTimeoutError: connection 2 to IP timed out The Treehouse queries 2 167 September 20, 2025 MongoDB Compass "This app can't run on your PC" error Forums Site Feedback dot-net , compass 1 357 March 11, 2025 The Index (Feb 2025): Using LangChain4j, Django, & Terraform With MongoDB $weeklyUpdate java , python , spring-data-odm , weekly-update , relational-migrator 0 270 February 28, 2025 ¿Cómo puedo comunicarme con Frontier en español? The Treehouse mongodb-shell 1 122 August 30, 2025 Mongomirror upcoming EOL on July 31st, 2025 Community Announcements migration , cluster-to-cluster-sync 1 436 May 2, 2025 A Recap of MUGs: February 2025 $weeklyUpdate dublin-mug , thailand-mug , chandigarh-mug , kualalumpur-mug , hanoi-mug 0 308 February 28, 2025 | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/612 | LLVM Weekly - #612, September 22nd 2025

Welcome to the six hundred and twelfth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org.

News and articles from around the web and events

Submissions for roundtables at the 2025 US LLVM Developers' Meeting are now open. Additionally, you can book a hotel room from the conference room block, and the call for papers is open for the LLVM/Offload workshop.

The next Bay Area monthly meetup will take place on September 29th. The next Darmstadt LLVM Meetup is happening on 24th September.

Sam Elliott wrote on Qualcomm's developer blog about improvements in LLVM 21 for Qualcomm's platforms.

According to the LLVM Calendar, in the coming week there will be the following:
Office hours with the following hosts: Kristof Beyls, Johannes Doerfert, Amara Emerson.
Online sync-ups on the following topics: ClangIR, pointer authentication, OpenMP, Flang, RISC-V, LLVM libc, HLSL.
For more details see the LLVM calendar, and the getting involved documentation on online sync-ups and office hours.

On the forums

Sharjeel Khan and co-authors posted an RFC on upstreaming compiler support for Lightweight Fault Isolation (LFI), a technique for in-process sandboxing.

Jon Chesterfield initiated a thread revisiting LLVM's AI contribution policy, which generated a lot of discussion.
Reid Kleckner pointed to a WIP PR that updates the current policy and has proposed a roundtable discussion at the LLVM Dev Meeting.

Pierre van Houtryve is proposing to add a caching system to FullLTO codegen, similar to ThinLTO's.

In the thread discussing a "structural GEP" proposal, Nikita Popov clarified why this proposal doesn't amount to effectively adding back getelementptr after migrating away from it to ptradd.

Britton Watson is looking for mentors for Student Travel Grant recipients. "Mentoring a travel-grant student is mostly about hospitality and belonging: helping students navigate the LLVM Developers' Meeting, making warm introductions to interesting people and subprojects, walking them through the hallway track, and encouraging participation at roundtables. We want students to leave with new connections to members in the LLVM community and ideas of how they can contribute back or be more involved."

Tom Stellard proposed enabling the new 'immutable releases' feature on GitHub, which would allow the LLVM project to mark a release as immutable, with the addition of new assets or modification of existing assets being disallowed.

Kiran Chandramohan is running a poll on the future of the Flang technical call, asking attendees whether to continue it and whether to merge it with the "general" call.

Davide Grohmann posted an MLIR RFC on integrating support for the TOSA extended instruction set into MLIR's SPIR-V dialect.

LLVM commits

RISCVVLOptimizer learned to handle recurrences, leading to the removal of further vl toggles. 65ad21d.

A new pass was introduced to drop "unnecessary" assumes (those deemed unlikely to be useful for further optimisation). 902ddda.

Membership rules for the LLVM Qualification Group were documented. 2b3f80dc.

llvm-profgen was extended to generate vtable profiles. 40886fb.

LoopStrengthReduce gained the ability to consider all addressing modes when generating potential solutions. 8fab811.

The BPF backend now supports jump tables.
c3fb2e1.

The IssueWidth for the Neoverse V1, N1, and N3 scheduling models was reduced, as it was found to perform better with this change. a044d61.

LLVM can now emit SFrame frame row entries (FREs). The commit message notes there are some remaining call frame information (CFI) directives to be supported before this is generally usable. 714f032.

A new optimisation was added in SROA (scalar replacement of aggregates) to remove alloca when there are multiple non-overlapping vector stores. 4bc9d29f.

You can now assign an arbitrary latency to an instruction using an annotation in llvm-mca. 4a9fdda.

TableGen's DecoderEmitter was reworked, resulting in 12% smaller table sizes (with more improvements expected when additional optimisations are added). 60bdf09.

Clang commits

clang-tidy gained support for adding custom checks using clang-query syntax. 584af2f.

-Wincompatible-pointer-types was made an error by default. b247698.

A premerge workflow was added for running clang-tidy on clang-tidy. 2ed7b9f.

Clang now permits implicit conversions from integral to boolean vectors. 134a58a.

ClangIR moved to using TableGen to generate LLVM lowering patterns. 81aaca3.

A new bugprone-derived-method-shadowing-base-method check was added to clang-tidy. 85d2a46.

Other project commits

The newly added ConvertComplexPow Flang pass will replace complex.pow operations with calls to Fortran runtime or libm functions. 54677d6.

A helper script was added to libcxx to produce benchmark results for older commits, and to visualise them. 91b0584, 00333ed.

The SPIR-V MLIR dialect testing strategy was documented. 371048e.

There are now Python bindings for the MLIR IRDL dialect. e5114a2.

Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://llvmweekly.org/issue/623 | LLVM Weekly - #623, December 8th 2025

Welcome to the six hundred and twenty-third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at https://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org.

News and articles from around the web and events

The call for proposals is now open for EuroLLVM 2026. Submit your proposal by 11th January 2026. EuroLLVM will take place in April in Dublin.

Additional recordings from the 2025 US LLVM Developers' Meeting have been appearing on YouTube, and they are now collected together in a handy playlist.

Miguel Cárdenas wrote on the LLVM blog about their GSoC project, Making LLVM Compilation Data Accessible: A Unified Visualization Tool for Compiler Artifacts.

My Igalia colleague Mikhail R. Gadelha has written up a blog post version of a talk delivered at the RISC-V Summit earlier this year about improving LLVM-generated code performance for RISC-V.

According to the LLVM Calendar, in the coming week there will be the following:
Office hours with the following hosts: Aaron Ballman, Alexey Bader, Alina Sbirlea, Kristof Beyls, Johannes Doerfert.
Online sync-ups on the following topics: formal specification, Flang, modules, lifetime safety, LLVM/offload, SPIR-V, OpenMP for flang, HLSL.
For more details see the LLVM calendar, and the getting involved documentation on online sync-ups and office hours.

On the forums

Alina Sbirlea and others are establishing a working group on formal specification for LLVM IR.
The kickoff meeting already took place earlier today, but a recurring meeting will be added to the LLVM calendar and I'm sure notes will be shared.

Chad Smith posted an RFC on flow-sensitive nullability, aiming to make Clang catch more null errors earlier. Yitzhak Mandelbaum shared a perspective from Google which featured an interesting (to me) tidbit: "From our analysis, Nullable is the wrong default. Of the hundreds of millions of lines of code we've analysed, Nonnull is far more prevalent. In third-party libraries we import, the ratio is 4:1 and in our own code (for historical reasons) 6:1. So, Nonnull as the default is preferred from the perspective of reducing syntactic noise."

Stefan Gränitz kicked off an RFC discussion on introducing a reference pass plugin for LLVM, with a goal of using it to improve the infrastructure for pass-plugins in upstream LLVM through things like end-to-end tests running on bots. Stefan points to examples like Polly or Enzyme as existing pass-plugins.

Britton Watson is seeking volunteers for the EuroLLVM student travel grant committee.

LLVM 21.1.7 was released. Note the comment about a potential ABI break.

Felipe de Azevedo Piovezan queried what the current status is for using spr for stacked pull requests.

Donát Nagy wrote up plans for improving the Clang checkers that report out-of-bounds errors.

Grigory Pastukhov would like to introduce a flatten_depth(N) attribute to Clang to allow inlining up to a specified depth.

Keith Smiley started a discussion on llvm-dsymutil performance, receiving some ideas from long-time dsymutil maintainer Jonas Devlieghere.

Volodymyr Turanskyy shared an RFC on running libc baremetal tests on Arm using qemu.

LLVM commits

The DebugCounter implementation was optimised, and as a result debug counters are now also enabled in non-assert release builds. 042a38f, d0f5a49.

The norms of the current LLVM community RFC process were documented. 4424a58.
The RISC-V backend learned to use the Zicond conditional move instructions for floating point selects when Zfinx/Zdinx is present (i.e. when there is no separate FPR so GPRs are used for floats). 2e21bb8.

A new getOrInsertDeclaration overload was added which allows an intrinsic declaration to be inserted with overload types being deduced from the provided return type and argument types if necessary. 822fc44.

The Hexagon backend gained target-specific passes for widening vector operations and for optimising shuffles. 4da31b6.

An llvm.protected.field.ptr intrinsic was added. 4afc256.

Intrinsics may now be used with callbr. e84fdbe.

A new llvm-reduce pass will sink defs to uses, aiming to reduce live ranges. 097e0e1.

The LLVM IR Module gained a new API to help iterate over just the function definitions (i.e. excluding declarations). 82f7d3c.

Clang commits

Clang's lifetime analysis can now suggest the insertion of missing [[clang::lifetimebound]] annotations. 5a74f7e.

The modular_format function attribute can now be specified. This is used to e.g. enable selecting a smaller printf implementation if not all functionality is needed. d041d5d.

A document was added detailing plans to deal with code duplication in the ClangIR codegen implementation. 6d8714b.

Other project commits

A show_descriptor intrinsic was implemented in flang, which will print details of a descriptor (extended Fortran pointer). 4b2714f.

LLVM's libc now has finer-grained control over the selection of higher performance function implementations. 8701c2a.

libcxx can now be used with newlib as the libc, including locale support. 72402e8.

A new LLDB instrumentation plugin was added for -fbounds-safety soft trap mode. The matching functionality isn't implemented in Clang yet. e27dec5.

A new breakpoint add subcommand was added to LLDB, aiming to supplant breakpoint set. 2110db0.

Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://hackmd.io/company?utm_source=blog&utm_medium=nav-bar | The HackMD Blog: Company

From company news to our product vision, from exciting launches to requested feature updates, read the latest on HackMD here.

# en # company
2025 in Review: New Features, Big Wins, and What Lies Ahead
2025 was about building momentum at HackMD. We are focused on refining how teams and communities collaborate, expanding the editor into a true workspace, and turning everyday documentation into shared knowledge that actually moves work forward.
Dec 31, 2025, by Chaseton Collins

# en # company
Touch down at JSConf 2025: HackMD connects with the JavaScript community
A fun recap of HackMD's experience at JSConf 2025 featuring photos, community moments, event highlights, and insights from the people shaping the future of JavaScript.
Nov 19, 2025, by Chaseton Collins

# en # company
Celebrating 2024: HackMD's biggest moments and a look ahead
2024 has been an incredible year at HackMD, filled with growth, innovation, and collaboration thanks to our amazing community. From launching your most-requested features to integrating web3 capabilities through Sign-In with Ethereum, we've worked to make HackMD more powerful and intuitive.
Dec 10, 2024, by Rachel Golden

# en # company # announcement
A bold new look
Check out HackMD's revamped look! After months of hard work and creativity, we are proud to unveil a fresh, modern look that reflects our growth and vision for the future.
Aug 27, 2024, by Rachel Golden

# en # product # announcement # company
Introducing Dark Mode: Plus a new look for your Markdown editor
Check out HackMD's new visual updates, including a revamped color scheme, an improved Share menu, and a new native dark mode. And who knows, you may see even more visual updates in the near future. But that's all we can say for now...
Jun 27, 2024, by Rachel Golden

# en # product # company # announcement
Sign-in with ETH is live on HackMD
HackMD introduces Sign-In with Ethereum (SIWE), enabling users to access their accounts using the Ethereum blockchain. SIWE enhances security, streamlines the authentication process, and represents a step forward in decentralized authentication solutions.
Feb 26, 2024, by Rachel Golden

# en # company
See you at ETHDenver 🏔️
HackMD will be at ETHDenver 2024! Our team is gearing up to connect with the brilliant minds shaping the future of web3, technology, and decentralization. Stay tuned for a special announcement during the event.
Feb 20, 2024, by Rachel Golden

© 2026 HackMD. All Rights Reserved. | 2026-01-13T09:30:34
https://logging.apache.org/log4j/2.x/components.html | Components :: Apache Log4j, a subproject of Apache Logging Services. Components The Log4j 2 distribution contains the following artifacts: log4j-bom A public Bill-of-Materials that manages all the versions of Log4j artifacts. You can import the BOM in your build tool of preference: Maven Gradle <dependencyManagement> <dependencies> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-bom</artifactId> <version>2.25.3</version> <scope>import</scope> <type>pom</type> </dependency> </dependencies> </dependencyManagement> dependencies { implementation platform('org.apache.logging.log4j:log4j-bom:2.25.3') } log4j A private Bill-of-Materials used during the compilation and testing of the project.
Do not use this artifact, since it also manages versions of third-party projects. Use log4j-bom instead. log4j-1.2-api JPMS module org.apache.log4j The log4j-1.2-api artifact contains several tools to help users migrate from Log4j 1 to Log4j 2. See Log4j 1 to Log4j 2 Bridge for details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-1.2-api</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-1.2-api' log4j-api JPMS module org.apache.logging.log4j The log4j-api artifact contains the Log4j API . See Log4j API for more details. Maven Gradle <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api</artifactId> <version>${log4j-api.version}</version> </dependency> implementation 'org.apache.logging.log4j:log4j-api:${log4j-api.version}' log4j-api-test JPMS module org.apache.logging.log4j.test The log4j-api-test artifact contains test fixtures useful to test Log4j API implementations. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-api-test</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-api-test' log4j-appserver JPMS module org.apache.logging.log4j.appserver The log4j-appserver artifact contains: A bridge from Tomcat JULI to the Log4j API. See Replacing Tomcat logging system for more information. A bridge from Jetty 9 logging API to the Log4j API. See Replacing Jetty logging system for more information Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-appserver</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. 
runtimeOnly 'org.apache.logging.log4j:log4j-appserver' log4j-cassandra JPMS module org.apache.logging.log4j.cassandra The log4j-cassandra artifact contains an appender for the Apache Cassandra database. See Cassandra Appender for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-cassandra</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-cassandra' log4j-core JPMS module org.apache.logging.log4j.core The log4j-core artifact contains the reference implementation of the Log4j API . See Reference implementation for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-core' log4j-core-test JPMS module org.apache.logging.log4j.core.test The log4j-core-test artifact contains test fixtures useful to extend the reference implementation . Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-core-test</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-core-test' log4j-couchdb JPMS module org.apache.logging.log4j.couchdb The log4j-couchdb artifact contains a provider to connect the NoSQL Appender with the Apache CouchDB database. See CouchDB provider for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-couchdb</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. 
runtimeOnly 'org.apache.logging.log4j:log4j-couchdb' log4j-docker JPMS module org.apache.logging.log4j.docker The log4j-docker artifact contains a lookup for applications running in a Docker container See Docker lookup for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-docker</artifactId> <version>2.25.3</version> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-docker:2.25.3' log4j-flume-ng JPMS module org.apache.logging.log4j.flume The log4j-flume-ng artifact contains an appender for the Apache Flume log data collection service. See Flume Appender for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-flume-ng</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-flume-ng' log4j-iostreams JPMS module org.apache.logging.log4j.iostreams The log4j-iostreams artifact is an extension of the Log4j API to connect with legacy stream-based logging methods. See Log4j IOStreams for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-iostreams</artifactId> </dependency> We assume you use log4j-bom for dependency management. implementation 'org.apache.logging.log4j:log4j-iostreams' log4j-jakarta-smtp JPMS module org.apache.logging.log4j.jakarta.smtp The log4j-jakarta-smtp contains an appender for the Jakarta Mail 2.0 API and later versions. See SMTP Appender for more information. Maven Gradle We assume you use log4j-bom for dependency management. 
<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jakarta-smtp</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jakarta-smtp' log4j-jakarta-web JPMS module org.apache.logging.log4j.jakarta.web The log4j-jakarta-web contains multiple utils to run your applications in a Jakarta Servlet 5.0 or later environment: It synchronizes the lifecycle of Log4j Core and your application. See Integrating with web applications for more details. It contains a lookup for the data contained in a Servlet context. See Web Lookup for more details. It contains an appender to forward log event to a Servlet. See Servlet Appender for more details. Don’t deploy this artifact together with log4j-web . Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jakarta-web</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jakarta-web' log4j-jcl JPMS module org.apache.logging.log4j.jcl The log4j-jcl artifact contains a bridge from Apache Commons Logging and the Log4j API . See Installing JCL-to-Log4j API bridge for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jcl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jcl' log4j-jdbc-dbcp2 JPMS module org.apache.logging.log4j.jdbc.dbcp2 The log4j-jdbc-dbcp2 artifact contains a data source for the JDBC Appender that uses Apache Commons DBCP . See PoolingDriver connection source for more details. Maven Gradle We assume you use log4j-bom for dependency management. 
<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jdbc-dbcp2</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jdbc-dbcp2' log4j-jpa JPMS module org.apache.logging.log4j.jpa The log4j-jpa artifact contains an appender for the Jakarta Persistence 2.2 API or Java Persistence API. See JPA Appender for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jpa</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jpa' log4j-jpl JPMS module org.apache.logging.log4j.jpl The log4j-jpl artifact contains a bridge from System.Logger to the Log4j API . See Installing the JPL-to-Log4j API bridge for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jpl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jpl' log4j-jul JPMS module org.apache.logging.log4j.jul The log4j-jul artifact contains a bridge from java.util.logging to the Log4j API . See Installing the JUL-to-Log4j API bridge for more details. Don’t deploy this artifact together with log4j-to-jul . Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jul</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jul' log4j-layout-template-json JPMS module org.apache.logging.log4j.json.template.layout The log4j-layout-template-json contains a highly extensible and configurable layout to format log events as JSON. 
See JSON Template Layout for details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-layout-template-json</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-layout-template-json' log4j-mongodb JPMS module org.apache.logging.log4j.mongodb The log4j-mongodb artifact contains a provider to connect the NoSQL Appender with the MongoDB database. It is based on the latest version of the Java driver. See MongoDb provider for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-mongodb</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-mongodb' log4j-mongodb4 JPMS module org.apache.logging.log4j.mongodb4 The log4j-mongodb4 artifact contains a provider to connect the NoSQL Appender with the MongoDB database. It is based on version 4.x of the Java driver. See MongoDb4 provider for more information. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-mongodb4</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-mongodb4' log4j-slf4j2-impl JPMS module org.apache.logging.log4j.slf4j2.impl The log4j-slf4j2-impl artifact contains a bridge from SLF4J 2 API to the Log4j API . See Installing the SLF4J-to-Log4j API bridge for more details. Don’t deploy this artifact together with either log4j-slf4j-impl or log4j-to-slf4j . Maven Gradle We assume you use log4j-bom for dependency management.
<dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j2-impl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-slf4j2-impl' log4j-slf4j-impl JPMS module org.apache.logging.log4j.slf4j.impl The log4j-slf4j-impl artifact contains a bridge from SLF4J 1 API to the Log4j API . See Installing the SLF4J-to-Log4j API bridge for more details. Don’t deploy this artifact together with either log4j-slf4j2-impl or log4j-to-slf4j . Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j-impl</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-slf4j-impl' log4j-spring-boot JPMS module org.apache.logging.log4j.spring.boot The log4j-spring-boot artifact contains multiple utils to integrate with Spring Framework 5.x or earlier versions and Spring Boot 2.x or earlier versions. It provides a property source . See Spring Property source for more details. It provides a lookup . See Spring lookup for more details. It provides an arbiter . See Spring arbiter for more details. It provides an alternative LoggingSystem implementation. See Log4j Spring Boot Support for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-spring-boot</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-spring-boot' log4j-spring-cloud-config-client JPMS module org.apache.logging.log4j.spring.cloud.config.client The log4j-spring-cloud-config-client provides utils to integrate with Spring Cloud Config 3.x or earlier versions. See Log4j Spring Cloud Configuration for more details. 
Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-spring-cloud-config-client</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-spring-cloud-config-client' log4j-taglib JPMS module org.apache.logging.log4j.taglib The log4j-taglib artifact provides a Jakarta Server Pages 2.3 or earlier library that logs to the Log4j API . See Log4j Taglib for more details. Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-taglib</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-taglib' log4j-to-jul JPMS module org.apache.logging.log4j.to.jul The log4j-to-jul artifact contains an implementation of the Log4j API that logs to java.util.logging . See Installing JUL for more details. Don’t deploy this artifact together with log4j-jul . Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-to-jul</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-to-jul' log4j-to-slf4j JPMS module org.apache.logging.log4j.to.slf4j The log4j-to-slf4j artifact contains an implementation of the Log4j API that logs to the SLF4J API . See Installing Logback for more details. Don’t deploy this artifact together with either log4j-slf4j-impl or log4j-slf4j2-impl . Maven Gradle We assume you use log4j-bom for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-to-slf4j' log4j-web JPMS module org.apache.logging.log4j.web The log4j-web artifact contains multiple utils to run your applications in a Jakarta Servlet 4.0 or Java EE Servlet environment: It synchronizes the lifecycle of Log4j Core and your application. See Integrating with web applications for more details. It contains a lookup for the data contained in a Servlet context. See Web Lookup for more details. It contains an appender to forward log events to a Servlet. See Servlet Appender for more details. Don’t deploy this artifact together with log4j-jakarta-web . Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-web</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-web' Copyright © 1999-2025 The Apache Software Foundation . Licensed under the Apache Software License, Version 2.0 . Please read our privacy policy . Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34
http://docs.buildbot.net/current/manual/upgrading/index.html | 2.12. Upgrading — Buildbot 4.3.0 documentation Buildbot 1. Buildbot Tutorial 2. Buildbot Manual 2.1. Introduction 2.2. Installation 2.3. Concepts 2.4. Secret Management 2.5. Configuration 2.6. Customization 2.7. Command-line Tool 2.8. Resources 2.9. Optimization 2.10. Plugin Infrastructure in Buildbot 2.11. Deployment 2.12. Upgrading 2.12.1. Upgrading to Buildbot 5.0 (not released) 2.12.2. Testing support 2.12.3. Non-reconfigurable services 2.12.4. HTTP service 2.12.5. Build factories 2.12.6. Database connectors 2.12.7. Schedulers 2.12.8. Reporters 2.12.9. Data API 2.12.10. Upgrading to Buildbot 4.0 2.12.11. Upgrading to Buildbot 3.0 2.12.12. Upgrading to Buildbot 2.0 2.12.13. Upgrading to Buildbot 1.0 2.12.14. Upgrading to Buildbot 0.9.0 2.12.15. New-Style Build Steps in Buildbot 0.9.0 2.12.16. Transition to “worker” terminology in BuildBot 0.9.0 3. Buildbot Development 4. Release Notes 5. Older Release Notes 6. API Indices Buildbot 2. Buildbot Manual 2.12. Upgrading View page source 2.12. Upgrading This section describes the process of upgrading the master and workers from old versions of Buildbot. Users of Buildbot are warned about backwards-incompatible changes by deprecation warnings produced by the code. Additionally, all backwards-incompatible changes are made at a major version change (e.g. 1.x to 2.0). A minor version change (e.g. 2.3 to 2.4) introduces backwards-incompatible changes only if they affect a small portion of users and are absolutely necessary. Direct upgrades across more than two major releases (e.g. 1.x to 3.x) are not supported. The versions of the master and the workers do not need to match, so it’s possible to upgrade them separately. Usually there are no actions needed to upgrade a worker other than installing a new version of the code and restarting it. 
Usually the process of upgrading the master is as simple as running the following command: buildbot upgrade-master basedir This command will also scan the master.cfg file for incompatibilities (by loading it and printing any errors or deprecation warnings that occur). It is safe to run this command multiple times. Warning The upgrade-master command may perform database schema modifications. To avoid any data loss or corruption, it should not be interrupted. As a safeguard, it ignores all signals except SIGKILL . To upgrade between major releases the best approach is first to upgrade to the latest minor release on the same major release. Then, fix all deprecation warnings by upgrading the configuration code to the replacement APIs. Finally, upgrade to the next major release. 2.12.1. Upgrading to Buildbot 5.0 (not released) 2.12.2. Testing support 2.12.3. Non-reconfigurable services 2.12.4. HTTP service 2.12.5. Build factories 2.12.6. Database connectors 2.12.7. Schedulers 2.12.8. Reporters 2.12.9. Data API 2.12.10. Upgrading to Buildbot 4.0 2.12.11. Upgrading to Buildbot 3.0 2.12.12. Upgrading to Buildbot 2.0 2.12.13. Upgrading to Buildbot 1.0 2.12.14. Upgrading to Buildbot 0.9.0 2.12.15. New-Style Build Steps in Buildbot 0.9.0 2.12.16. Transition to “worker” terminology in BuildBot 0.9.0 Previous Next © Copyright Buildbot Team Members. Built with Sphinx using a theme provided by Read the Docs . | 2026-01-13T09:30:34 |
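The recommended sequence above can be sketched as shell commands. The version pins and the base directory path below are illustrative assumptions, not values taken from the Buildbot documentation:

```shell
# Sketch: upgrading a master across one major release boundary.
# Version pins and the basedir are hypothetical examples.

# 1. Move to the latest minor release of the current major series.
pip install --upgrade 'buildbot<4'          # e.g. land on the last 3.x

# 2. Upgrade the database/config and surface deprecation warnings;
#    fix master.cfg until this runs clean. It is safe to re-run.
buildbot upgrade-master /var/lib/buildbot/master

# 3. Only then step to the next major release and upgrade again.
pip install --upgrade 'buildbot<5'
buildbot upgrade-master /var/lib/buildbot/master
```

Remember not to interrupt upgrade-master while it is running, since it may be performing database schema modifications.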
https://hackmd.io/changelog?utm_source=blog&utm_medium=Changelog-article#2025-08-05 | The HackMD Blog: Changelog Blog Product Company Changelog Education Sign in Sign in Get HackMD free Changelog Stay up to date on everything we ship. Sep 23, 2025 Improved Tag Management Organize your notes faster than ever. You can now create, rename, and delete tags directly from the Overview sidebar. Select multiple notes to add or remove tags in a single action, streamlining how you categorize knowledge. Sep 4, 2025 Profile Overhaul: Pin Notes, Categories & Connections We’ve completely overhauled profiles. Showcase your best work by pinning notes, organize content with categories, and add your social links (Email, Telegram, Discord, X). Improved sharing helps your knowledge reach a wider audience. Aug 5, 2025 Cite Paragraphs, Stay Connected Highlight great ideas and give credit easily. With Paragraph Citation , just paste a paragraph link and choose Citation to add a quote and an automatic footnote with the source. When others cite your note, you can discover who’s building on your work through citation cards . It’s a simple way to connect thoughts and help knowledge spread and grow. Jul 8, 2025 Use Guided Comments to Spark Better Feedback Comments are great—but sometimes visitors just need a little prompt. With Guided Comments, you can add a custom prompt and quick-reply options to spark better feedback. When someone clicks your avatar, they’ll see your prompt and can jump right into the conversation—no pressure, just a gentle invitation to engage. 1 2 3 11 Build together with the ultimate Markdown editor. Learning Features Tutorial book Resources Blog Changelog Enterprise Pricing Company About Press Kit Trust Center Terms of use Privacy policy English 中文 日本語 © 2026 HackMD. All Rights Reserved. | 2026-01-13T09:30:34 |
https://logging.apache.org/log4j/2.x/manual/appenders.html | Appenders :: Apache Log4j a subproject of Apache Logging Services Home Download Release notes Support Versioning and maintenance policy Security Manual Getting started Installation API Loggers Event Logger Simple Logger Status Logger Fluent API Fish tagging Levels Markers Thread Context Messages Flow Tracing Implementation Architecture Configuration Configuration file Configuration properties Programmatic configuration Appenders File appenders Rolling file appenders Database appenders Network Appenders Message queue appenders Delegating Appenders Layouts JSON Template Layout Pattern Layout Lookups Filters Scripts JMX Extending Plugins Performance Asynchronous loggers Garbage-free logging References Plugin reference Java API reference Resources F.A.Q. Migrating from Log4j 1 Migrating from Logback Migrating from SLF4J Building GraalVM native images Integrating with Hibernate Integrating with Jakarta EE Integrating with service-oriented architectures Development Components Log4j IOStreams Log4j Spring Boot Support Log4j Spring Cloud Configuration JUL-to-Log4j bridge Log4j-to-JUL bridge Related projects Log4j Jakarta EE Log4j JMX GUI Log4j Kotlin Log4j Scala Log4j Tools Log4j Transformation Tools Home Manual Implementation Configuration Appenders Edit this Page Appenders Appenders are responsible for delivering log events to their destination. Every Appender must implement the Appender interface. While not strictly required by the Log4j Core architecture, most appenders inherit from AbstractAppender and: delegate the filtering of log events to an implementation of Filter . See Filters for more information. delegate the formatting of log events to an implementation of Layout . See Layouts for more information. only directly handle the writing of log event data to the target destination. Appenders always have a name so that they can be referenced from a logger configuration . 
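To illustrate the naming requirement, here is a minimal hypothetical log4j2.xml in which a logger references an appender by its name attribute (the appender name CONSOLE and the pattern are arbitrary choices):

```xml
<Configuration>
  <Appenders>
    <!-- The name attribute is how loggers refer to this appender. -->
    <Console name="CONSOLE">
      <PatternLayout pattern="%d [%t] %p %c - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="CONSOLE"/> <!-- reference by appender name -->
    </Root>
  </Loggers>
</Configuration>
```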
Common concerns Buffering Appenders that use stream-like resources (such as files or TCP connections) have an internal ByteBuffer that can be used to format each log event before sending it to the underlying resource. The buffer is used if: the log4j2.enableDirectEncoders configuration property is enabled, or the bufferedIo configuration attribute is enabled. The buffer is flushed to the underlying resource on three occasions: when the buffer is full; at the end of each log event batch, if asynchronous loggers or appenders are used; and at the end of each log event, if the immediateFlush configuration attribute is true . These configuration attributes are shared by multiple appenders: bufferSize Type int Default value log4j2.encoderByteBufferSize This configuration attribute specifies the size of the ByteBuffer used by the appender. bufferedIo Type boolean Default value true If set to true , Log4j Core will use an internal ByteBuffer to store log events before sending them. If the log4j2.enableDirectEncoders configuration property is set to true , the internal ByteBuffer will always be used. immediateFlush Type boolean Default value true If set to true , Log4j will flush Log4j Core and Java buffers at the end of each event: the internal ByteBuffer of the appender will be flushed. for appenders based on Java’s OutputStream , a call to the OutputStream.flush() method will be performed. This setting only guarantees that a byte representation of the log event is passed to the operating system. It does not ensure that the operating system writes the event to the underlying storage. If you are using asynchronous loggers or appenders , you can set this attribute to false . Log4j Core will still flush the internal buffer whenever the log event queue becomes empty. Exception handling By default, Log4j Core uses Status Logger to report exceptions that occur in appenders. 
This behavior can be changed using the following configuration property: ignoreExceptions Type boolean Default value true If false , logging exceptions will be forwarded to the caller. Otherwise, they will be logged using Status Logger . If logging is important for your business, consider using a Failover Appender to redirect log events to a different appender in case of exceptions. Runtime evaluation of attributes The following configuration attributes are also evaluated at runtime, and so can contain escaped $${...} property substitution expressions. Table 1. List of attributes evaluated at runtime Component Parameter Event type Evaluation context HTTP Appender Property/value Log event global Kafka Appender key Log event global NoSQL Appender KeyValuePair/value Log event global PropertiesRewrite Policy Property/value Log event global Routes Container pattern Log event log event Rolling File Appenders filePattern Rollover global Optional Rollover Actions basePath Rollover global The Route component of the Routing Appender is special: its children are evaluated at runtime, but they are not evaluated at configuration time. Inside the Route component you should not use escaped $${...} property substitution expressions, but only unescaped ${...} property substitution expressions. See runtime property substitution for more details. Collection Log4j bundles several predefined appenders to assist with common deployment use cases. They are documented in separate pages based on their target resource: Console Appender As one might expect, the Console Appender writes its output to either the standard output or the standard error output. The appender supports three different ways to access the output streams: direct This mode gives the best performance. It can be enabled by setting the direct attribute to true . default By default, the Console appender uses the values of System.out or System.err present at configuration time . 
Any changes to those streams at runtime will be ignored. follow This mode always uses the current value of the System.out and System.err streams. It can be enabled by setting the follow attribute to true . This setting might be useful in multi-application environments. Some application servers modify System.out and System.err to always point to the currently running application. Table 2. Console Appender configuration attributes Attribute Type Default value Description Required name String The name of the appender. Optional bufferSize int 8192 The size of the ByteBuffer internally used by the appender. See Buffering for more details. direct boolean false If set to true , log events will be written directly to either FileDescriptor.out or FileDescriptor.err . This setting bypasses the buffering of System.out and System.err and might provide performance comparable to a file appender . If other logging backends or the application itself uses System.out/System.err , setting this to true might cause interleaved output. This setting is incompatible with the follow attribute . follow boolean false If set to true , the appender will honor reassignments of System.out (resp. System.err ) via System.setOut (resp. System.setErr ). Otherwise, the value of System.out (resp. System.err ) at configuration time will be used. This setting is incompatible with the direct attribute . ignoreExceptions boolean true If false , logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored. Logging exceptions are always also logged to Status Logger . immediateFlush boolean true If set to true , the appender will flush its internal buffer and the buffer of the System.out/System.err stream after each log event. See Buffering for more details. target Target SYSTEM_OUT It specifies which standard output stream to use: SYSTEM_OUT It uses the standard output. SYSTEM_ERR It uses the standard error output. Table 3. 
Common nested elements Type Multiplicity Description Filter zero or one Allows filtering log events just before they are formatted and sent. See also appender filtering stage . Layout zero or one Formats log events. See Layouts for more information. 📖 Plugin reference for Console Configuration examples A typical configuration in a development environment might look like: XML JSON YAML Properties Snippet from an example log4j2.xml <Console name="CONSOLE"> <PatternLayout pattern="%d [%t] %p %c - %m%n"/> </Console> Snippet from an example log4j2.json "Console": { "name": "CONSOLE", "PatternLayout": { "pattern": "%d [%t] %p %c - %m%n" } } Snippet from an example log4j2.yaml Console: name: "CONSOLE" PatternLayout: pattern: "%d [%t] %p %c - %m%n" Snippet from an example log4j2.properties appender.0.type = Console appender.0.name = CONSOLE appender.0.layout.type = PatternLayout appender.0.layout.pattern = %d [%t] %p %c - %m%n A typical configuration for a production environment might look like XML JSON YAML Properties Snippet from an example log4j2.xml <Console name="CONSOLE" direct="true"> (1) <JsonTemplateLayout/> (2) </Console> Snippet from an example log4j2.json "Console": { "name": "CONSOLE", "direct": true, (1) "JsonTemplateLayout": {} (2) } Snippet from an example log4j2.yaml Console: name: "CONSOLE" direct: true (1) JsonTemplateLayout: {} (2) Snippet from an example log4j2.properties appender.0.type = Console appender.0.name = CONSOLE (1) appender.0.direct = true (2) appender.0.layout.type = JsonTemplateLayout 1 Improve performance by setting direct to true . 2 Use a structured layout. Additional dependencies are required, see JSON Template Layout . File appenders File appenders write logs to the filesystem. They can be further split into: Single file appenders See File appenders for details. Rolling file appenders See Rolling file appenders for details. Database appenders The appenders write log events directly to a database. 
Cassandra appender Sends log events to Apache Cassandra JDBC appender Sends log events to a JDBC driver JPA appender Uses Jakarta Persistence API to deliver log events to a database NoSQL appender Stores log events in a document-oriented database See Database appenders for details. Network appenders These appenders use simple network protocols to transmit log events to a remote host. The supported network protocols are: UDP TCP These are handled by the Socket Appender . HTTP This is handled by the HTTP Appender . SMTP This is handled by the SMTP Appender . See Network Appenders for details. Message queue appenders Message queue appenders forward log events to a message broker. The following systems are supported: Flume appender Forwards log events to an Apache Flume server. JMS appender Forwards log events to a Jakarta Messaging 2.0 broker. Kafka appender Forwards log events to an Apache Kafka server. ZeroMQ/JeroMQ appender Forwards log events to a ZeroMQ broker. See Message queue appenders for details. Servlet Appender The Servlet appender allows users to forward all logging calls to the ServletContext.log() methods. The ServletContext.log(String, Throwable) method predates modern logging APIs. By using the Servlet appender, you typically will not be able to differentiate log events by log level or logger name. The Servlet Appender has no configuration attributes. 
📖 Plugin reference for Servlet You can use it by declaring an appender of type Servlet in your configuration file: XML JSON YAML Properties <Servlet name="SERVLET"> <PatternLayout pattern="%m%n" alwaysWriteExceptions="false"/> (1) </Servlet> "Servlet": { "name": "SERVLET", "PatternLayout": { "pattern": "%m%n", "alwaysWriteExceptions": false (1) } } Servlet: name: "SERVLET" PatternLayout: pattern: "%m%n" alwaysWriteExceptions: false (1) appender.0.type = Servlet appender.0.name = SERVLET appender.0.layout.type = PatternLayout appender.0.layout.pattern = %m%n (1) appender.0.layout.alwaysWriteExceptions = false 1 Encodes events using Pattern Layout and forwards the call to ServletContext.log() . Setting alwaysWriteExceptions to false prevents the stacktrace from appearing as both part of the message argument and as the throwable argument, which would otherwise result in the stacktrace being printed to the log file twice. Additional runtime dependencies are required for using the Servlet appender: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-jakarta-web</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-jakarta-web' Click here if you are using Jakarta EE 8 or any version of Java EE. Jakarta EE 8 and all Java EE application servers use the legacy javax package prefix instead of jakarta . If you are using those application servers, you should replace the dependencies above with: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-web</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-web' See Integrating with Jakarta EE for more information. 
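The exception-handling guidance earlier recommends a Failover Appender when logging is business-critical. A hedged sketch of such a setup follows; the appender names, file path, and layout are illustrative, and note that the primary appender must set ignoreExceptions to false so that failures propagate to the Failover appender:

```xml
<!-- Hypothetical snippet from a log4j2.xml: if the FILE appender
     throws, events are redirected to the CONSOLE fallback. -->
<Appenders>
  <File name="FILE" fileName="logs/app.log" ignoreExceptions="false">
    <PatternLayout pattern="%d [%t] %p %c - %m%n"/>
  </File>
  <Console name="CONSOLE">
    <PatternLayout pattern="%d [%t] %p %c - %m%n"/>
  </Console>
  <Failover name="FAILOVER" primary="FILE">
    <Failovers>
      <AppenderRef ref="CONSOLE"/>
    </Failovers>
  </Failover>
</Appenders>
```

Loggers should then reference FAILOVER rather than FILE directly.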
Delegating appenders Delegating appenders are intended to decorate other appenders: Asynchronous appender Performs all I/O on a dedicated thread Failover appender Provides a backup appender in case an appender fails Rewrite appender Modifies log events prior to delivering them to the target Routing appender Dynamically chooses a different appender for each log event See Delegating Appenders for details. Extending Appenders are plugins implementing the Appender interface . This section will guide you on how to create custom ones. Implementing a reliable and efficient appender is a challenging task! We strongly advise you to Use existing appenders and/or managers whenever appropriate Share your use case and ask for feedback in a user support channel Plugin preliminaries The Log4j plugin system is the de facto extension mechanism embraced by various Log4j components. Plugins provide extension points to components, which can be used to implement new features without modifying the original component. It is analogous to a dependency injection framework, but curated for Log4j-specific needs. In a nutshell, you annotate your classes with @Plugin and their ( static ) factory methods with @PluginFactory . Lastly, you inform the Log4j plugin system to discover these custom classes. This is done by running the PluginProcessor annotation processor while building your project. Refer to Plugins for details. Extending appenders Appenders are plugins implementing the Appender interface . We recommend extending AbstractAppender , which provides implementation convenience. While annotating your appender with @Plugin , you need to make sure that It has a unique name attribute across all available Appender plugins The category attribute is set to Node.CATEGORY Most appender implementations use managers , which model an abstraction owning the resources, such as an OutputStream or a socket. When a reconfiguration occurs, a new appender will be created. 
However, if nothing significant in the previous manager has changed, the new appender will simply reference it instead of creating a new one. This ensures that events are not lost while a reconfiguration is taking place, without requiring logging to pause during the reconfiguration. You are strongly advised to study the manager concept in the predefined appenders , and either use an existing manager or create your own. You can check out the following files for examples: HttpAppender.java – HttpAppender sends log events over HTTP using HttpURLConnectionManager ConsoleAppender.java – Console Appender writes log events to either System.out or System.err using OutputStreamManager | 2026-01-13T09:30:34
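The extension recipe above can be sketched as a small custom appender. This is an illustrative, untested outline that assumes log4j-core 2.x on the classpath; the plugin name "Memory" and the in-memory storage are hypothetical, and a production appender should delegate resource ownership to a manager as discussed above:

```java
import java.io.Serializable;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.apache.logging.log4j.core.Filter;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.Node;
import org.apache.logging.log4j.core.config.Property;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginElement;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;

// Sketch only: keeps formatted events in memory instead of owning an
// external resource through a manager.
@Plugin(name = "Memory", category = Node.CATEGORY) // unique name; Node.CATEGORY
public final class MemoryAppender extends AbstractAppender {

    private final List<String> lines = new CopyOnWriteArrayList<>();

    private MemoryAppender(String name, Filter filter,
                           Layout<? extends Serializable> layout) {
        super(name, filter, layout, true, Property.EMPTY_ARRAY);
    }

    @Override
    public void append(LogEvent event) {
        // Delegate formatting to the configured Layout.
        lines.add(new String(getLayout().toByteArray(event)));
    }

    // (static) factory method discovered by the plugin system.
    @PluginFactory
    public static MemoryAppender createAppender(
            @PluginAttribute("name") String name,
            @PluginElement("Filter") Filter filter,
            @PluginElement("Layout") Layout<? extends Serializable> layout) {
        return new MemoryAppender(name, filter, layout);
    }
}
```

Remember to run the PluginProcessor annotation processor at build time so the plugin system can discover the class.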