url | text | ts |
|---|---|---|
https://penneo.com/da/why-penneo/ | Why choose Penneo? Efficient digital signing and full compliance with EU legislation. Penneo puts your workflows into high gear: you handle every agreement quickly and securely, always protected against compliance breaches. The solution saves you time and minimises errors, and you can easily sign in accordance with all eIDAS requirements via MitID, itsme®, BankID, .beID, and passport. BOOK A MEETING Why others choose Penneo: 3,000+ companies, including the four largest audit firms, use Penneo. 60% of the documents sent with Penneo are signed within 24 hours. 81% of all annual reports in Denmark are signed with Penneo.
Key benefits of Penneo: Save time and work more efficiently. Give your customers a better experience. Ensure strong data security and compliance with legislation. Integrate Penneo with your existing systems and avoid manual data entry, errors, and switching between platforms. Automate even the most complex signing flows with rules that send documents to multiple signers in a fixed order. Avoid delays: automatic reminders keep your signing processes on track. Use standardised email templates to communicate professionally and consistently with signers, with no need to write new messages every time. Save time on follow-up: automatic status updates keep case managers informed of changes and signer activity. Customers can sign multiple documents at once with a single digital signature. Sign and fill in forms digitally with MitID, passport, or eID, with no printing, scanning, or postal mail. The intuitive platform guides signers step by step and makes the process easy, even for those who are not technically inclined. Customers automatically get access to their signed documents in a personal archive, so they no longer need to request copies or store them locally. Signers receive notifications and use the platform in their preferred language, for a more pleasant and accessible experience. Create qualified electronic signatures with itsme®, Norwegian BankID, passport, or .beID, legally valid throughout the EU and on equal footing with a handwritten signature. Create advanced electronic signatures compliant with eIDAS using MitID or Swedish BankID. Penneo complies with the GDPR and is certified to ISO 27001 and ISO 27701, so your data is in safe hands. Protect sensitive documents with access control via eID or SMS, so only the right people get access.
Every action is recorded in a secure, immutable log that provides full transparency and documentation. Why Penneo gives you more than just a standard solution (Penneo vs. standard electronic-signature solutions): Complex signing flows: Penneo handles even the most complex flows, with support for multiple signers, role-based ordering, and approval rounds, leaving nothing to chance. Standard solutions are often limited to simple, linear flows and lack the flexibility to support multiple steps, roles, and approvals. Qualified electronic signatures (QES): Penneo supports qualified electronic signatures with passport, itsme®, Norwegian BankID, or .beID, with the same legal validity as a handwritten signature and recognised throughout the EU. Standard solutions rarely offer QES and are typically limited to signature types with a lower assurance level that do not meet the requirements for full EU recognition. Advanced electronic signatures (AES): Penneo supports advanced electronic signatures with trusted identity solutions such as MitID, MitID Erhverv, and Swedish BankID, giving you both high security and full traceability. Standard solutions may support AES in some cases but often lack integration with national eIDs or passports. Supported languages: Penneo is available in 8 languages, ensuring a user-friendly experience across markets for both you and your customers. Standard solutions often support only a few languages, which can create barriers for international users and limit usability. Open API: Penneo offers a flexible, well-documented open API that makes it easy to integrate with your existing systems and automate workflows. Standard solutions often have limited API access or require custom development, making integration more cumbersome and less scalable.
Automatic reminders: Penneo automatically sends reminders to signers, so you avoid delays and ensure that signing flows are completed on time. Standard solutions often require manual follow-up, which adds administrative work and increases the risk of missed deadlines. Ease of use: Penneo is designed with an intuitive, guided signing experience that makes it easy for everyone, regardless of technical experience, to sign correctly and quickly. Standard solutions can be confusing or technically demanding, especially for first-time users or people without a technical background. Data protection and information security: Penneo complies with the GDPR and is certified to ISO 27001 and ISO 27701, ensuring the highest standard of data security and protection of personal data. Security levels vary among standard solutions, and many lack either the relevant certifications or full GDPR compliance. Scales with your business: Penneo is built to grow with your company, from small teams to large organisations, with flexible workflows and efficient user management. Standard solutions often have limited scalability and can struggle with complex processes or large volumes of documents. Customer support: Penneo offers fast, personal support, with dedicated assistance tailored to your company's needs. Standard solutions often provide generic support that can be slow, impersonal, or limited to basic documentation. SEE ALL FEATURES Goldwasser Exchange cuts account opening from 10 days to 24 hours with Penneo: "Previously, it typically took between 5 and 10 days to complete an account opening. Today, thanks to Penneo, electronic signatures, and the automation we have implemented, we can open accounts in under 24 hours."
— Jonathan Goldwasser, CEO, Goldwasser Exchange. Read the customer story. Built for even the most complex signing processes: Whether you work in audit, accounting, real estate, finance, or HR, Penneo lets your team sign digitally, securely and efficiently. The solution is designed to automate even the most complex signing processes, so you can focus on what really matters. Audit & Accounting: Send engagement letters, auditor's reports, and annual reports for signature in a few clicks. Real estate: Close property transactions faster by removing the need for physical meetings and manual paperwork. Law firms: Let your clients sign remotely with secure digital signatures that meet eIDAS requirements. Finance and banking: Cut down on paperwork and manual work, and give your customers a smooth, professional experience. HR & Recruitment: Get new employees on board quickly by sending employment contracts for digital signature in minutes. Fast setup. Easy switch. No disruption. Switching to Penneo does not mean starting over: thanks to our ready-made integrations and open API, you can easily connect your existing systems and transfer data across platforms without disrupting daily operations. SEE ALL INTEGRATIONS Thousands of companies trust Penneo to simplify their signing processes. Time to join the community?
BOOK A MEETING SEE PRICING PENNEO A/S - Gærtorvet 1-5, DK-1799 København V - CVR: 35633766 | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/576 | LLVM Weekly - #576, January 13th 2025 Welcome to the five hundred and seventy-sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. News and articles from around the web and events The call for proposals for EuroLLVM 2025 is now out; submit by Feb 14th 2025. The conference will take place in Berlin, April 15-16th (with workshops on the 14th). Fangrui Song blogged about understanding and improving Clang's -ftime-report. The next Portland area LLVM social will take place on January 16th. According to the LLVM calendar, in the coming week there will be the following: Office hours with the following hosts: Aaron Ballman, Alexey Bader, Alina Sbirlea, Phoebe Wang, Johannes Doerfert. Online sync-ups on the following topics: MLIR C/C++ frontend, pointer authentication, C/C++ language working group, Flang, floating point, SPIR-V, RISC-V, LLVM libc. For more details see the LLVM calendar and the getting involved documentation on online sync-ups and office hours. On the forums Chris Bieneman is collecting nominations for LLVM project area teams, as part of the implementation of the new LLVM governance model. David Spickett is attempting to complete a survey of LLDB's supported platforms and architectures. Erich Keane summarised the current state of OpenACC support in Clang. LLVM Foundation board meeting minutes for November have now been posted. Alexander Richardson started a thread on clarifying the semantics of ptrtoint.
Jonas Devlieghere started an RFC discussion on adding a status line to the command-line LLDB interface, as an alternative to attempting to display progress events inline. Sunil Srivastava proposed changing the default C++ mode for Clang to C++20, although the consensus so far seems to be that it's too early due to some missing features and bugs. Chris Bieneman posted an RFC proposal for adding an offload execution test suite, intended to test execution of programs on hardware accelerators (GPUs, FPGAs, NPUs, etc). Pavel Labath is looking to further speed up DWARF indexing in LLDB and started a thread to discuss this, including results from experimentation so far. Martin Brænne kicked off an RFC thread on adding support for a new annotate_decl attribute in Clang. This would be a general-purpose way of adding annotations for use by static analysis tools. Unlike the annotate attribute, this wouldn't produce llvm.annotation intrinsics. LLVM commits SPIR-V module analysis was made substantially faster. 83c1d00. The DirectX resource.load.rawbuffer intrinsic was implemented. cba9bd5. AArch64 and Arm were migrated to using GenericTable rather than SearchableTable for system registers. 7d53762, 5c7a696. MC layer support was added to the RISC-V backend for the Qualcomm Xqcicm (conditional move) extension. 737d6ca. LangRef documentation was added for ABI and call-site attributes. 38565da. The -wasm-use-legacy-eh option can be used with the WebAssembly backend to use the legacy exception handling proposal. a8e1135. The NVPTX backend gained intrinsics for asynchronous copy of tensor data using TMA. 372044e. Clang commits Initial support was added for SYCL offload compilation. d00f65c. __builtin_sincos was added. e4e2f53. The mechanism for deciding the priority of functions selected in function multi-versioning for AArch64 was updated to match changes to the ACLE. 8e65940. Documentation was added on the level of support for OpenMP 6.0 features. c85d516.
Temporal profiling for IRPGO can now be enabled with -ftemporal-profile. 91892e8. The output of -ftime-report was reorganised to improve clarity. 0de18e7. Other project commits New utility scripts were added to libcxx to help compare benchmark results between builds. 292c135. std::stable_sort now utilises radix sort on integers, improving performance in many cases. 69b54c1. Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/577 | LLVM Weekly - #577, January 20th 2025 Welcome to the five hundred and seventy-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback via email: asb@asbradbury.org, or Mastodon: @llvmweekly@fosstodon.org / @asb@fosstodon.org, or Bluesky: @llvmweekly.org / @asbradbury.org. News and articles from around the web and events LLVM 19.1.7 was released. According to the LLVM calendar, in the coming week there will be the following: Office hours with the following hosts: Kristof Beyls, Amara Emerson, Johannes Doerfert. Online sync-ups on the following topics: Flang, vectoriser improvements, libc++, security response group, LLVM/Offload, classic Flang, OpenMP for Flang. For more details see the LLVM calendar and the getting involved documentation on online sync-ups and office hours. On the forums Renato Golin, on behalf of a range of MLIR contributors, shared a proposed new governance model for MLIR. Maxim Kuvyrkov kicked off an RFC thread on the possibility of an LLVM long-term support release. Tom Stellard suggested requiring contributors to have at least 3 commits and acks from 2 current committers in order to be granted commit access. Jacques Pienaar started a second RFC thread on incubating the MLIR tensor compute primitives (TCP) dialect. Donát Nagy started a discussion on getting rid of unnecessary name-handling boilerplate in Clang checker implementations. Franklin Zhang proposed upstreaming enhancements to BOLT for the AArch64 Linux kernel, reporting improvements of 8% on an nginx benchmark. LLVM commits The captures(...)
attribute was introduced to LLVM IR, and can be used to indicate the ways in which the callee may capture the pointer. 22e9024. GlobalOpt gained the ability to statically resolve calls to versioned functions in some circumstances. 831527a. The SPIR-V backend gained a pre-legalisation instruction-combining pass. eddeb36. Assembler/disassembler support was added to the RISC-V backend for the Qualcomm Xqciint (interrupts) extension. 171d3ed. LLVM can now detect if the host is an Apple M4. a082cc1. Stack clash protection for dynamic alloca was implemented for RISC-V. 01d7f43. Function record iteration for coverage was sped up significantly. e899930. The -print-loop-func-scope option can be used with -print-after-all to ensure loop passes always print the full function IR. 5b6a26c. Clang commits The new [[clang::explicit]] attribute can be used to ensure that fields are initialised explicitly. 1594413. Clang gained a release note explaining how it will now more aggressively exploit undefined behaviour on pointer addition overflow. c2979c5. clang-format of Verilog input was dramatically sped up. fbef1f8. Module-level lookup was implemented for C++20 modules. c5e4afe. Clang's multilib logic gained support for selecting library variants that don't correspond to existing command-line options. 226a9d7. The HeuristicResolver from clangd was upstreamed to Clang. As noted in the relevant RFC, the hope is to use this to improve things such as SemaCodeComplete. ae932be. Other project commits MLIR's Python bindings now support free-threading (no-GIL) mode. f136c80. flang gained support for -ftime-report. 310c281. LLVM's libc documentation was reorganised to be more beginner-friendly. 692c77f. MLIR's FloatType was turned into a type interface. c24ce32, f023da1. Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34 |
https://docs.aws.amazon.com/id_id/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html#lambda-edge-permissions-required | Set up IAM permissions and roles for Lambda@Edge - Amazon CloudFront Amazon CloudFront Documentation, Developer Guide Set up IAM permissions and roles for Lambda@Edge To configure Lambda@Edge, you need the following IAM permissions and roles: IAM permissions — these permissions let you create the Lambda function and associate it with your CloudFront distribution. Lambda function execution role (IAM role) — the Lambda service principal assumes this role to execute your function. Service-linked roles for Lambda@Edge — service-linked roles allow specific AWS services to replicate Lambda functions to AWS Regions and to enable the use of CloudWatch log files for CloudFront. IAM permissions required to associate a Lambda@Edge function with a CloudFront distribution In addition to the IAM permissions that you need for Lambda, you need the following permissions to associate a Lambda function with a CloudFront distribution: lambda:GetFunction — grants permission to get configuration information for your Lambda function and a presigned URL to download the .zip file that contains the function. lambda:EnableReplication* — grants permission on the resource policy so that the Lambda replication service can get the function code and configuration. lambda:DisableReplication* — grants permission on the resource policy so that the Lambda replication service can delete the function.
Important: You must include the asterisk ( * ) at the end of the lambda:EnableReplication* and lambda:DisableReplication* actions. For the resource, specify the ARN of the function version that you want to execute when a CloudFront event occurs, as in the following example: arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2 iam:CreateServiceLinkedRole — grants permission to create the service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. After you have configured Lambda@Edge for the first time, the service-linked role is created for you automatically; you don't need to add this permission for other distributions that use Lambda@Edge. cloudfront:UpdateDistribution or cloudfront:CreateDistribution — grants permission to update or create a distribution. For more information, see the following topics: Identity and Access Management for Amazon CloudFront, and Lambda resource access permissions in the AWS Lambda Developer Guide. Function execution role for service principals You must create an IAM role that can be assumed by the lambda.amazonaws.com and edgelambda.amazonaws.com service principals when they execute your function. Tip: When you create the function in the Lambda console, you can choose to create a new execution role from an AWS policy template. This automatically adds the Lambda@Edge permissions required to execute your function. See step 5 in Tutorial: Create a basic Lambda@Edge function. For more information about creating an IAM role manually, see Creating a role and attaching a policy (console) in the IAM User Guide. Example: Role trust policy You can add this role under the Trust Relationship tab in the IAM console. Do not add this policy under the Permissions tab.
JSON { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } For more information about the permissions that you need to grant to the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide. Note: By default, whenever a CloudFront event triggers a Lambda function, data is written to CloudWatch Logs. If you want to use these logs, the execution role needs permission to write data to CloudWatch Logs; you can use the predefined AWSLambdaBasicExecutionRole managed policy to grant that permission to the execution role. For more information about CloudWatch Logs, see Edge function logs. If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role also needs permission to perform those actions. Service-linked roles for Lambda@Edge Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. Lambda@Edge uses the following IAM service-linked roles: AWSServiceRoleForLambdaReplicator — Lambda@Edge uses this role to allow Lambda@Edge to replicate functions to AWS Regions. When you first add a Lambda@Edge trigger in CloudFront, a role named AWSServiceRoleForLambdaReplicator is created automatically to allow Lambda@Edge to replicate functions to AWS Regions. This role is required in order to use Lambda@Edge functions.
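For comparison, the identity-based permissions listed earlier on this page (lambda:GetFunction, lambda:EnableReplication*, lambda:DisableReplication*, iam:CreateServiceLinkedRole, and the CloudFront distribution actions) could be collected into a single IAM policy along the following lines. This is an illustrative sketch, not an AWS-provided policy; the account ID, function name, and version number are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AssociateLambdaEdgeFunction",
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:EnableReplication*",
        "lambda:DisableReplication*"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2"
    },
    {
      "Sid": "ManageDistributionsAndLinkedRole",
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateDistribution",
        "cloudfront:UpdateDistribution",
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": "*"
    }
  ]
}
```

Note that the Resource in the first statement names a specific function version, matching the ARN example above; the asterisks in the two replication actions are part of the action names and must be kept.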
The ARN for the AWSServiceRoleForLambdaReplicator role looks like the following example: arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator AWSServiceRoleForCloudFrontLogger — CloudFront uses this role to push log files to CloudWatch; you can use those log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association in CloudFront, to allow CloudFront to push Lambda@Edge error log files to CloudWatch. The ARN for the AWSServiceRoleForCloudFrontLogger role looks like this: arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger Service-linked roles make it easier to set up and use Lambda@Edge because you don't have to add the required permissions manually. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume the roles. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity. You must delete the associated CloudFront or Lambda@Edge resources before you can delete the service-linked roles. This protects your Lambda@Edge resources by ensuring that you cannot remove a service-linked role that is still required to access active resources. For more information about service-linked roles, see Service-linked roles for CloudFront. Service-linked role permissions for Lambda@Edge Lambda@Edge uses two service-linked roles, named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each of these roles.
Contents: Service-linked role permissions for the Lambda replicator; Service-linked role permissions for the CloudFront logger. Service-linked role permissions for the Lambda replicator This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role. The role's permissions policy allows Lambda@Edge to complete the following actions on the specified resources: lambda:CreateFunction on arn:aws:lambda:*:*:function:*; lambda:DeleteFunction on arn:aws:lambda:*:*:function:*; lambda:DisableReplication on arn:aws:lambda:*:*:function:*; iam:PassRole on all AWS resources; cloudfront:ListDistributionsByLambdaFunction on all AWS resources. Service-linked role permissions for the CloudFront logger This service-linked role allows CloudFront to push log files into CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role. The role's permissions policy allows Lambda@Edge to complete the following actions on the resource arn:aws:logs:*:*:log-group:/aws/cloudfront/*: logs:CreateLogGroup; logs:CreateLogStream; logs:PutLogEvents. You must configure permissions to allow an IAM entity (such as a user, group, or role) to delete the Lambda@Edge service-linked roles. For more information, see Service-linked role permissions in the IAM User Guide. Creating service-linked roles for Lambda@Edge You don't usually create the service-linked roles for Lambda@Edge manually. The service creates the roles for you automatically in the following scenarios: When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role if it doesn't already exist. This role allows Lambda to replicate Lambda@Edge functions to AWS Regions.
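The logger role's action list above corresponds to a permissions policy along these lines. This is a reconstruction for illustration only, not the verbatim AWS-managed policy document:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/cloudfront/*"
    }
  ]
}
```

The resource pattern restricts the role to log groups under /aws/cloudfront/, which is why the role can write Lambda@Edge error logs there but cannot touch other log groups in your account.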
If you delete the service-linked role, the role is created again when you add a new trigger for Lambda@Edge in a distribution. When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role if it doesn't already exist. This role allows CloudFront to push your log files to CloudWatch. If you delete the service-linked role, the role is created again when you update or create a CloudFront distribution that has a Lambda@Edge association. To create these service-linked roles manually, you can run the following AWS Command Line Interface (AWS CLI) commands: To create the AWSServiceRoleForLambdaReplicator role, run the following command. aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com To create the AWSServiceRoleForCloudFrontLogger role, run the following command. aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com Editing the Lambda@Edge service-linked roles Lambda@Edge does not allow you to edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After the service creates a service-linked role, you cannot change the name of the role, because various entities might reference it. However, you can use IAM to edit the description of the role. For more information, see Editing a service-linked role in the IAM User Guide.
Supported AWS Regions for the Lambda@Edge service-linked roles CloudFront supports using service-linked roles for Lambda@Edge in the following AWS Regions: US East (N. Virginia) – us-east-1; US East (Ohio) – us-east-2; US West (N. California) – us-west-1; US West (Oregon) – us-west-2; Asia Pacific (Mumbai) – ap-south-1; Asia Pacific (Seoul) – ap-northeast-2; Asia Pacific (Singapore) – ap-southeast-1; Asia Pacific (Sydney) – ap-southeast-2; Asia Pacific (Tokyo) – ap-northeast-1; Europe (Frankfurt) – eu-central-1; Europe (Ireland) – eu-west-1; Europe (London) – eu-west-2; South America (São Paulo) – sa-east-1. | 2026-01-13T09:30:34 |
https://releases.llvm.org/18.1.8/tools/flang/docs/index.html | Welcome to Flang's documentation — The Flang Compiler Welcome to Flang's documentation ¶ Flang is LLVM's Fortran frontend, which can be found here. It is often referred to as "LLVM Flang" to differentiate it from "Classic Flang"; these are two separate and independent Fortran compilers. LLVM Flang is under active development. While it is capable of generating executables for a number of examples, some functionality is still missing. See Getting Involved for tips on how to get in touch with us and to learn more about the current status. Flang |version| (In-Progress) Release Notes Contributing to Flang ¶ C++14/17 features used in f18 Flang C++ Style Guide Design Guideline Fortran For C Programmers Getting Involved Getting Started How to implement a Semantic Check in Flang Pull request checklist Design Documents ¶ Aliasing in Fortran Aliasing analysis in FIR Aliasing rules Array Composition Assumed-Rank Objects Bijective Internal Name Uniquing Representation of Fortran function calls Implementation of CHARACTER types in f18 Complex Operations Control Flow Graph Compiler directives supported by Flang DO CONCURRENT isn't necessarily concurrent Fortran Extensions supported by Flang A first take on Fortran 202X features for LLVM Flang Design: FIR Array operations array_merge_store FIR Language Reference Flang command line argument reference Flang drivers CMake Support Testing Frontend Driver Plugins LLVM Pass Plugins A Fortran feature history cheat sheet Design: Fortran IR Fortran Tests in the LLVM Test Suite Variable and Expression value concepts in HLFIR Alternatives that were not retained Fortran I/O Runtime Library Internal Design Trampolines for pointers to
internal procedures. A categorization of standard (2018) and extended Fortran intrinsic procedures Implementation of Intrinsic types in f18 Semantics: Resolving Labels and Construct Names Module Files OpenACC in Flang OpenMP 4.5 Grammar OpenMP Semantic Analysis Compiler options comparison Overview of Compiler Phases Analysis Lowering Object code generation and linking Parameterized Derived Types (PDTs) Fortran standard Testing Current TODOs Parser Combinators The F18 Parser Polymorphic Entities Testing Current TODOs Fortran Preprocessing Procedure Pointer Testing Current TODOs Runtime Descriptors The derived type runtime information table Semantic Analysis Fortran 2018 Grammar -fstack-arrays Indices and tables ¶ Index Module Index Search Page Navigation index next | Flang Home | Documentation » Welcome to Flang’s documentation © Copyright 2017-2024, The Flang Team. Last updated on Jun 19, 2024. Created using Sphinx 7.1.2. | 2026-01-13T09:30:34 |
http://docs.buildbot.net/current/manual/cmdline.html#buildbot-worker | 2.7. Command-line Tool — Buildbot 4.3.0 documentation

2.7. Command-line Tool

This section describes command-line tools available after buildbot installation. The two main command-line tools are buildbot and buildbot-worker. The former manages a Buildbot master and the latter manages a Buildbot worker.

Every command-line tool has a list of global options and a set of commands which have their own options. One can run these tools in the following way:

buildbot [global options] command [command options]
buildbot-worker [global options] command [command options]

The buildbot command is used on the master, while buildbot-worker is used on the worker. Global options are the same for both tools and perform the following actions:

--help Print general help about available commands and global options and exit. All subsequent arguments are ignored.

--verbose Set verbose output.

--version Print the current buildbot version and exit. All subsequent arguments are ignored.

You can get help on any command by specifying --help as a command option:

buildbot command --help

You can also use the manual pages for buildbot and buildbot-worker for quick reference on command-line options. The remainder of this section describes each buildbot command.
See Command Line Index for a full list.

2.7.1. buildbot

The buildbot command-line tool can be used to start or stop a buildmaster or buildbot, and to interact with a running buildmaster. Some of its subcommands are intended for buildmaster admins, while some are for developers who are editing the code that the buildbot is monitoring.

2.7.1.1. Administrator Tools

The following buildbot sub-commands are intended for buildmaster administrators:

create-master

buildbot create-master -r {BASEDIR}

This creates a new directory and populates it with files that allow it to be used as a buildmaster's base directory. You will usually want to use the -r option to create a relocatable buildbot.tac. This allows you to move the master directory without editing this file.

upgrade-master

buildbot upgrade-master {BASEDIR}

This upgrades a previously created buildmaster's base directory for a new version of buildbot master source code. This will copy the web server static files, and potentially upgrade the db.

start

buildbot start [--nodaemon] {BASEDIR}

This starts a buildmaster which was already created in the given base directory. The daemon is launched in the background, with events logged to a file named twistd.log. The --nodaemon option instructs Buildbot to skip daemonizing; the process will start in the foreground and will only return to the command line when it is stopped. Additionally, the user can set the environment variable START_TIMEOUT to specify the amount of time the script waits for the master to start before it declares the operation a failure.

restart

buildbot restart [--nodaemon] {BASEDIR}

Restart the buildmaster. This is equivalent to stop followed by start. The --nodaemon option has the same meaning as for start.

stop

buildbot stop {BASEDIR}

This terminates the daemon (either buildmaster or worker) running in the given directory. The --clean option shuts down the buildmaster cleanly.
With the --no-wait option, buildbot stop will send the buildmaster a shutdown signal and exit immediately, without waiting for the shutdown to complete.

sighup

buildbot sighup {BASEDIR}

This sends a SIGHUP to the buildmaster running in the given directory, which causes it to re-read its master.cfg file.

checkconfig

buildbot checkconfig {BASEDIR|CONFIG_FILE}

This checks if the buildmaster configuration is well-formed and contains no deprecated or invalid elements. If no arguments are used, or the base directory is passed as the argument, the config file specified in buildbot.tac is checked. If the argument is the path to a config file, it will be checked without using the buildbot.tac file.

cleanupdb

buildbot cleanupdb {BASEDIR|CONFIG_FILE} [-q]

This command is a frontend for various database maintenance jobs: optimiselogs: this optimization groups logs into bigger chunks to apply a higher level of compression. The script runs for as long as it takes to finish the job, including the time needed to check the master.cfg file.

copy-db

buildbot copy-db {DESTINATION_URL} {BASEDIR} [-q]

This command copies all buildbot data from the source database configured in the buildbot configuration file to the destination database. The URL of the destination database is specified on the command line. The destination database may have a different type from the source database. The destination database must be empty; the script will initialize it in the same way as if a new Buildbot installation was created. The source database must already be upgraded to the current Buildbot version by the buildbot upgrade-master command.

2.7.1.2. Developer Tools

These tools are provided for use by the developers who are working on the code that the buildbot is monitoring.

try

This lets a developer ask the question: What would happen if I committed this patch right now?
It runs the unit test suite (across multiple build platforms) on the developer's current code, allowing them to make sure they will not break the tree when they finally commit their changes.

The buildbot try command is meant to be run from within a developer's local tree, and starts by figuring out the base revision of that tree (what revision was current the last time the tree was updated), and a patch that can be applied to that revision of the tree to make it match the developer's copy. This (revision, patch) pair is then sent to the buildmaster, which runs a build with that SourceStamp. If you want, the tool will emit status messages as the builds run, and will not terminate until the first failure has been detected (or the last success).

There is an alternate form which accepts a pre-made patch file (typically the output of a command like svn diff). This --diff form does not require a local tree to run from. See try --diff concerning the --diff command option.

For this command to work, several pieces must be in place: the Try_Jobdir or Try_Userpass scheduler, as well as some client-side configuration.

Locating the master

The try command needs to be told how to connect to the try scheduler, and must know which of the authentication approaches described above is in use by the buildmaster. You specify the approach by using --connect=ssh or --connect=pb (or try_connect = 'ssh' or try_connect = 'pb' in .buildbot/options).

For the PB approach, the command must be given a --master argument (in the form HOST:PORT) that points to the TCP port that you picked in the Try_Userpass scheduler. It also takes a --username and --passwd pair of arguments that match one of the entries in the buildmaster's userpass list. These arguments can also be provided as try_master, try_username, and try_password entries in the .buildbot/options file.

For the SSH approach, the command must be given --host and --username to get to the buildmaster host.
It must also be given --jobdir, which points to the inlet directory configured above. The jobdir can be relative to the user's home directory, but most of the time you will use an explicit path like ~buildbot/project/trydir. These arguments can be provided in .buildbot/options as try_host, try_username, try_password, and try_jobdir. If you need to use something different from the default ssh command for connecting to the remote system, you can use the --ssh command-line option or try_ssh in the configuration file.

The SSH approach also provides a --buildbotbin argument to allow specification of the buildbot binary to run on the buildmaster. This is useful in the case where buildbot is installed in a virtualenv on the buildmaster host, or in other circumstances where the buildbot command is not on the path of the user given by --username. The --buildbotbin argument can be provided in .buildbot/options as try_buildbotbin.

The following command-line arguments are deprecated, but retained for backward compatibility:

--tryhost is replaced by --host
--trydir is replaced by --jobdir
--master is replaced by --masterstatus

Likewise, the following .buildbot/options file entries are deprecated, but retained for backward compatibility:

try_dir is replaced by try_jobdir
masterstatus is replaced by try_masterstatus

Waiting for results

If you provide the --wait option (or try_wait = True in .buildbot/options), the buildbot try command will wait until your changes have either been proven good or bad before exiting. Unless you use the --quiet option (or try_quiet=True), it will emit a progress message every 60 seconds until the builds have completed. The SSH connection method does not support waiting for results.
Choosing the Builders

A trial build is performed on multiple Builders at the same time, and the developer gets to choose which Builders are used (limited to a set selected by the buildmaster admin with the TryScheduler's builderNames= argument). The set you choose will depend upon what your goals are: if you are concerned about cross-platform compatibility, you should use multiple Builders, one from each platform of interest. You might use just one builder if that platform has libraries or other facilities that allow better test coverage than what you can accomplish on your own machine, or faster test runs.

The set of Builders to use can be specified with multiple --builder arguments on the command line. It can also be specified with a single try_builders option in .buildbot/options that uses a list of strings to specify all the Builder names:

try_builders = ["full-OSX", "full-win32", "full-linux"]

If you are using the PB approach, you can get the names of the builders that are configured for the try scheduler using the get-builder-names argument:

buildbot try --get-builder-names --connect=pb --master=... --username=... --passwd=...

Specifying the VC system

The try command also needs to know how to take the developer's current tree and extract the (revision, patch) source-stamp pair. Each VC system uses a different process, so you start by telling the try command which VC system you are using, with an argument like --vc=cvs or --vc=git. This can also be provided as try_vc in .buildbot/options. The following names are recognized: bzr cvs darcs hg git mtn p4 svn

Finding the top of the tree

Some VC systems (notably CVS and SVN) track each directory more-or-less independently, which means the try command needs to move up to the top of the project tree before it will be able to construct a proper full-tree patch. To accomplish this, the try command will crawl up through the parent directories until it finds a marker file.
The default name for this marker file is .buildbot-top, so when you are using CVS or SVN you should touch .buildbot-top from the top of your tree before running buildbot try. Alternatively, you can use a filename like ChangeLog or README, since many projects put one of these files in their top-most directory (and nowhere else). To set this filename, use --topfile=ChangeLog, or set it in the options file with try_topfile = 'ChangeLog'.

You can also manually set the top of the tree with --topdir=~/trees/mytree, or try_topdir = '~/trees/mytree'. If you use try_topdir in a .buildbot/options file, you will need a separate options file for each tree you use, so it may be more convenient to use the try_topfile approach instead.

Other VC systems which work on full projects instead of individual directories (Darcs, Mercurial, Git, Monotone) do not require try to know the top directory, so the --try-topfile and --try-topdir arguments will be ignored. If the try command cannot find the top directory, it will abort with an error message.

The following command-line arguments are deprecated, but retained for backward compatibility:

--try-topdir is replaced by --topdir
--try-topfile is replaced by --topfile

Determining the branch name

Some VC systems record the branch information in a way that try can locate it. For the others, if you are using something other than the default branch, you will have to tell the buildbot which branch your tree is using. You can do this with either the --branch argument, or a try_branch entry in the .buildbot/options file.

Determining the revision and patch

Each VC system has a separate approach for determining the tree's base revision and computing a patch.

CVS

try pretends that the tree is up to date. It converts the current time into a -D time specification, uses it as the base revision, and computes the diff between the upstream tree as of that point in time versus the current contents.
This works, more or less, but requires that the local clock be in reasonably good sync with the repository.

SVN

try does a svn status -u to find the latest repository revision number (emitted on the last line in the Status against revision: NN message). It then performs an svn diff -r NN to find out how your tree differs from the repository version, and sends the resulting patch to the buildmaster. If your tree is not up to date, this will result in the try tree being created with the latest revision, then backwards patches applied to bring it back to the version you actually checked out (plus your actual code changes), but this will still result in the correct tree being used for the build.

bzr

try does a bzr revision-info to find the base revision, then a bzr diff -r$base.. to obtain the patch.

Mercurial

hg parents --template '{node}\n' emits the full revision id (as opposed to the common 12-char truncated form), which is a SHA1 hash of the current revision's contents. This is used as the base revision. hg diff then provides the patch relative to that revision. For try to work, your working directory must only have patches that are available from the same remotely-available repository that the build process' source.Mercurial will use.

Perforce

try does a p4 changes -m1 ... to determine the latest changelist and implicitly assumes that the local tree is synced to this revision. This is followed by a p4 diff -du to obtain the patch. A p4 patch differs slightly from a normal diff: it contains full depot paths and must be converted to paths relative to the branch top. To convert, the following restriction is imposed: the p4base (see P4Source) is assumed to be //depot.

Darcs

try does a darcs changes --context to find the list of all patches back to and including the last tag that was made. This text file (plus the location of a repository that contains all these patches) is sufficient to re-create the tree.
Therefore the contents of this context file are the revision stamp for a Darcs-controlled source tree. It then does a darcs diff -u to compute the patch relative to that revision.

Git

git branch -v lists all the branches available in the local repository along with the revision ID each points to and a short summary of the last commit. The line containing the currently checked-out branch begins with "* " (star and space) while all the others start with "  " (two spaces). try scans for this line and extracts the branch name and revision from it. Then it generates a diff against the base revision.

Todo: I'm not sure if this actually works the way it's intended, since the extracted base revision might not actually exist in the upstream repository. Perhaps we need to add a --remote option to specify the remote tracking branch to generate a diff against.

Monotone

mtn automate get_base_revision_id emits the full revision id, which is a SHA1 hash of the current revision's contents. This is used as the base revision. mtn diff then provides the patch relative to that revision. For try to work, your working directory must only have patches that are available from the same remotely-available repository that the build process' source.Monotone will use.

patch information

You can provide --who=dev to designate who is running the try build. This will add the dev to the Reason field on the try build's status web page. You can also set try_who = dev in the .buildbot/options file. Note that --who=dev will not work on version 0.8.3 or earlier masters. Similarly, --comment=COMMENT will specify the comment for the patch, which is also displayed in the patch information. The corresponding config-file option is try_comment.
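As a side note on the marker-file search described earlier under "Finding the top of the tree": the upward crawl can be sketched in plain shell. find_top is a hypothetical helper written only to illustrate the idea; it is not part of Buildbot.

```shell
# Walk upward from the current directory until the marker file is found,
# mimicking how `buildbot try` locates the top of a CVS/SVN tree.
find_top() {
  dir=$(pwd)
  while [ "$dir" != "/" ]; do
    if [ -e "$dir/$1" ]; then
      printf '%s\n' "$dir"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  return 1   # no marker found: abort with an error, as `try` does
}

# Throwaway tree for demonstration.
mkdir -p /tmp/mytree/sub/dir
touch /tmp/mytree/.buildbot-top
(cd /tmp/mytree/sub/dir && find_top .buildbot-top)
```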
Sending properties

You can set properties to send with your change using either the --property=key=value option, which sets a single property, or the --properties=key1=value1,key2=value2... option, which sets multiple comma-separated properties. Either of these can be specified multiple times. Note that the --properties option uses commas to split properties, so if your property value itself contains a comma, you'll need to use the --property option to set it.

try --diff

Sometimes you might have a patch from someone else that you want to submit to the buildbot. For example, a user may have created a patch to fix some specific bug and sent it to you by email. You've inspected the patch and suspect that it might do the job (and have at least confirmed that it doesn't do anything evil). Now you want to test it out.

One approach would be to check out a new local tree, apply the patch, run your local tests, then use buildbot try to run the tests on other platforms. An alternate approach is to use the buildbot try --diff form to have the buildbot test the patch without using a local tree. This form takes a --diff argument which points to a file that contains the patch you want to apply. By default this patch will be applied to the TRUNK revision, but if you give the optional --baserev argument, a tree of the given revision will be used as a starting point instead of TRUNK. You can also use buildbot try --diff=- to read the patch from stdin.

Each patch has a patchlevel associated with it. This indicates the number of slashes (and preceding pathnames) that should be stripped before applying the diff. This exactly corresponds to the -p or --strip argument to the patch utility. By default buildbot try --diff uses a patchlevel of 0, but you can override this with the -p argument.
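The patchlevel behaves exactly like patch(1)'s -p argument, so a quick throwaway demonstration with the standard patch utility makes the slash-stripping concrete. The files and diff below are illustrative only, nothing Buildbot-specific:

```shell
# Build a tiny tree and a git-style diff whose paths carry an "a/" prefix.
mkdir -p demo/src
printf 'hello\n' > demo/src/file.txt
cat > fix.diff <<'EOF'
--- a/src/file.txt
+++ b/src/file.txt
@@ -1 +1 @@
-hello
+hello, world
EOF
# -p1 strips one leading path component ("a/"), so the diff applies
# against src/file.txt inside the tree; -p0 would fail to find the file.
(cd demo && patch -p1 < ../fix.diff)
cat demo/src/file.txt
```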
When you use --diff, you do not need to use any of the other options that relate to a local tree, specifically --vc, --try-topfile, or --try-topdir. These options will be ignored. Of course you must still specify how to get to the buildmaster (with --connect, --tryhost, etc).

2.7.1.3. Other Tools

These tools are generally used by buildmaster administrators.

sendchange

This command is used to tell the buildmaster about source changes. It is intended to be used from within a commit script, installed on the VC server. It requires that you have a PBChangeSource running in the buildmaster (by being set in c['change_source']).

buildbot sendchange --master {MASTERHOST}:{PORT} --auth {USER}:{PASS} --who {USER} {FILENAMES..}

The --auth option specifies the credentials to use to connect to the master, in the form user:pass. If the password is omitted, then sendchange will prompt for it. If both are omitted, the old default (username "change" and password "changepw") will be used. Note that this password is well-known, and should not be used on an internet-accessible port.

The --master and --username arguments can also be given in the options file (see .buildbot config directory). There are other (optional) arguments which can influence the Change that gets submitted:

--branch (or the branch option) This provides the (string) branch specifier. If omitted, it defaults to None, indicating the default branch. All files included in this Change must be on the same branch.

--category (or the category option) This provides the (string) category specifier. If omitted, it defaults to None, indicating no category. The category property can be used by schedulers to filter what changes they listen to.

--project (or the project option) This provides the (string) project to which this change applies, and defaults to ''.
The project can be used by schedulers to decide which builders should respond to a particular change.

--repository (or the repository option) This provides the repository from which this change came, and defaults to ''.

--revision This provides a revision specifier, appropriate to the VC system in use.

--revision_file This provides a filename which will be opened and the contents used as the revision specifier. This is specifically for Darcs, which uses the output of darcs changes --context as a revision specifier. This context file can be a couple of kilobytes long, spanning a couple of lines per patch, and would be a hassle to pass as a command-line argument.

--property This parameter is used to set a property on the Change generated by sendchange. Properties are specified as a name:value pair, separated by a colon. You may specify many properties by passing this parameter multiple times.

--comments This provides the change comments as a single argument. You may want to use --logfile instead.

--logfile This instructs the tool to read the change comments from the given file. If you use - as the filename, the tool will read the change comments from stdin.

--encoding Specifies the character encoding for all other parameters, defaulting to 'utf8'.

--vc Specifies which VC system the Change is coming from, one of: cvs, svn, darcs, hg, bzr, git, mtn, or p4. Defaults to None.

user

Note that in order to use this command, you need to configure a CommandlineUserManager instance in your master.cfg file, which is explained in Users Options. This command allows you to manage users in buildbot's database. No extra requirements are needed to use this command, aside from the Buildmaster running. For details on how Buildbot manages users, see Users.

--master The user command can be run virtually anywhere, provided the location of the running buildmaster. The --master argument is of the form MASTERHOST:PORT.
--username PB connection authentication that should match the arguments to CommandlineUserManager.

--passwd PB connection authentication that should match the arguments to CommandlineUserManager.

--op There are four supported values for the --op argument: add, update, remove, and get. Each is described in full in the following sections.

--bb_username Used with the --op=update option, this sets the user's username for web authentication in the database. It requires --bb_password to be set along with it.

--bb_password Also used with the --op=update option, this sets the password portion of a user's web authentication credentials into the database. The password is first encrypted prior to storage for security reasons.

--ids When working with users, you need to be able to refer to them by unique identifiers to find particular users in the database. The --ids option lets you specify a comma-separated list of these identifiers for use with the user command. The --ids option is used only when using --op=remove or --op=get.

--info Users are known in buildbot as a collection of attributes tied together by some unique identifier (see Users). These attributes are specified in the form {TYPE}={VALUE} when using the --info option. These {TYPE}={VALUE} pairs are specified in a comma-separated list, so for example:

--info=svn=jdoe,git='John Doe <joe@example.com>'

The --info option can be specified multiple times in the user command, as each specified option will be interpreted as a new user. Note that --info is only used with --op=add or with --op=update, and whenever you use --op=update you need to specify the identifier of the user you want to update. This is done by prepending the --info arguments with {ID:}.
If we were to update 'jdoe' from the previous example, it would look like this:

--info=jdoe:git='Joe Doe <joe@example.com>'

Note that --master, --username, --passwd, and --op are always required to issue the user command. The --master, --username, and --passwd options can be specified in the options file with the keywords user_master, user_username, and user_passwd, respectively. If user_master is not specified, then master from the options file will be used instead.

Below are examples of how each command should look. Whenever a user command is successful, results will be shown to whoever issued the command.

For --op=add:

buildbot user --master={MASTERHOST} --op=add \
    --username={USER} --passwd={USERPW} \
    --info={TYPE}={VALUE},...

For --op=update:

buildbot user --master={MASTERHOST} --op=update \
    --username={USER} --passwd={USERPW} \
    --info={ID}:{TYPE}={VALUE},...

For --op=remove:

buildbot user --master={MASTERHOST} --op=remove \
    --username={USER} --passwd={USERPW} \
    --ids={ID1},{ID2},...

For --op=get:

buildbot user --master={MASTERHOST} --op=get \
    --username={USER} --passwd={USERPW} \
    --ids={ID1},{ID2},...

A note on --op=update: when updating the --bb_username and --bb_password, the --info doesn't need to have additional {TYPE}={VALUE} pairs to update and can just take the {ID} portion.

2.7.1.4. .buildbot config directory

Many of the buildbot tools must be told how to contact the buildmaster that they interact with. This specification can be provided as a command-line argument, but most of the time it will be easier to set them in an options file. The buildbot command will look for a special directory named .buildbot, starting from the current directory (where the command was run) and crawling upwards, eventually looking in the user's home directory.
It will look for a file named options in this directory, and will evaluate it as a Python script, looking for certain names to be set. You can just put simple name = 'value' pairs in this file to set the options. For a description of the names used in this file, please see the documentation for the individual buildbot sub-commands. The following is a brief sample of what this file's contents could be:

# for status-reading tools
masterstatus = 'buildbot.example.org:12345'
# for 'sendchange' or the debug port
master = 'buildbot.example.org:18990'

Note carefully that the names in the options file usually do not match the command-line option name.

master Equivalent to --master for sendchange. It is the location of the pb.PBChangeSource for sendchange.

username Equivalent to --username for the sendchange command.

branch Equivalent to --branch for the sendchange command.

category Equivalent to --category for the sendchange command.

try_connect Equivalent to --connect, this specifies how the try command should deliver its request to the buildmaster. The currently accepted values are ssh and pb.

try_builders Equivalent to --builders, specifies which builders should be used for the try build.

try_vc Equivalent to --vc for try, this specifies the version control system being used.

try_branch Equivalent to --branch, this indicates that the current tree is on a non-trunk branch.

try_topdir, try_topfile Use try_topdir, equivalent to --try-topdir, to explicitly indicate the top of your working tree, or try_topfile, equivalent to --try-topfile, to name a file that will only be found in that top-most directory.

try_host, try_username, try_dir When try_connect is ssh, the command will use try_host for --tryhost, try_username for --username, and try_dir for --trydir. Apologies for the confusing presence and absence of 'try'.
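Putting several of these names together, a client-side options file combining sendchange and try settings might look like the following sketch. All host names, ports, usernames, and builder names below are placeholders, not values from the manual:

```python
# ~/.buildbot/options -- evaluated as a Python script by the buildbot tool.
# Every value here is a placeholder for illustration.

# for 'sendchange' or the debug port
master = 'buildbot.example.org:18990'
username = 'alice'

# settings for 'buildbot try' over the PB connection method
try_connect = 'pb'
try_master = 'buildbot.example.org:8031'
try_username = 'alice'
try_password = 'secret'
try_vc = 'git'
try_builders = ['full-linux', 'full-win32']
try_wait = True
```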
try_username, try_password, try_master Similarly, when try_connect is pb, the command will pay attention to try_username for --username, try_password for --passwd, and try_master for --master.

try_wait, masterstatus try_wait and masterstatus (equivalent to --wait and master, respectively) are used to ask the try command to wait for the requested build to complete.

2.7.2. buildbot-worker

The buildbot-worker command-line tool is used for worker management only and does not provide any additional functionality. One can create, start, stop, and restart the worker.

2.7.2.1. create-worker

This creates a new directory and populates it with files that let it be used as a worker's base directory. You must provide several arguments, which are used to create the initial buildbot.tac file. The -r option is advisable here, just like for create-master.

buildbot-worker create-worker -r {BASEDIR} {MASTERHOST}:{PORT} {WORKERNAME} {PASSWORD}

The create-worker options are described in Worker Options.

2.7.2.2. start

This starts a worker which was already created in the given base directory. The daemon is launched in the background, with events logged to a file named twistd.log.

buildbot-worker start [--nodaemon] BASEDIR

The --nodaemon option instructs Buildbot to skip daemonizing; the process will start in the foreground and will only return to the command line when it is stopped.

2.7.2.3. restart

buildbot-worker restart [--nodaemon] BASEDIR

This restarts a worker which is already running. It is equivalent to a stop followed by a start. The --nodaemon option has the same meaning as for start.

2.7.2.4. stop

This terminates the daemon worker running in the given directory.

buildbot-worker stop BASEDIR

© Copyright Buildbot Team Members.
http://docs.buildbot.net/current/manual/configuration/multicodebase.html | 2.5.24. Multiple-Codebase Builds — Buildbot 4.3.0 documentation

2.5.24. Multiple-Codebase Builds

What if an end-product is composed of code from several codebases? Changes may arrive from different repositories within the tree-stable-timer period. Buildbot will not only use the source-trees that contain changes but also needs the remaining source-trees to build the complete product. For this reason, a Scheduler can be configured to base a build on a set of several source-trees that can (partly) be overridden by the information from incoming Changes. As described in Source-Stamps, the source for each codebase is identified by a source stamp, containing its repository, branch and revision. A full build set will specify a source stamp set describing the source to use for each codebase. Configuring all of this takes a coordinated approach.
A complete multiple-repository configuration consists of:

a codebase generator — Every relevant change arriving from a version-control system must contain a codebase. This is done by a codebaseGenerator that is defined in the configuration. Most generators examine the repository of a change to determine its codebase, using project-specific rules.

some schedulers — Each scheduler has to be configured with the set of all codebases required to build the product. These codebases indicate the set of required source-trees. In order for the scheduler to be able to produce a complete set for each build, the configuration can give a default repository, branch, and revision for each codebase. When a scheduler must generate a source stamp for a codebase that has received no changes, it applies these default values.

multiple source steps, one for each codebase — A Builder's build factory must include a source step for each codebase. Each source step has a codebase attribute which is used to select an appropriate source stamp from the build's source stamp set. This information comes from the arrived changes or from the scheduler's configured default values.

Note: Each source step has to have its own workdir set in order for the checkout of each codebase to be done in its own directory.

Note: Ensure you specify the codebase within your source step's Interpolate() calls (e.g. http://.../svn/%(src:codebase:branch)s). See Interpolate for details.

Warning: Defining a codebaseGenerator that returns non-empty (not '') codebases will change the behavior of all the schedulers.
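The codebase generator described above is plain Python in master.cfg. A minimal sketch, assuming two hypothetical repositories — the mapping and the URLs are illustrative; only the c['codebaseGenerator'] registration key belongs to Buildbot's configuration:

```python
# Sketch of a codebaseGenerator: map each change's repository URL to a
# codebase name.  The repository URLs below are hypothetical examples.
all_repositories = {
    'https://example.org/repo/mainapp.git': 'mainapp',
    'https://example.org/repo/library.git': 'library',
}

def codebase_generator(chdict):
    # Buildbot calls the generator with a change dictionary; its
    # 'repository' key identifies where the change came from.
    return all_repositories[chdict['repository']]

# In master.cfg this would be registered as:
# c['codebaseGenerator'] = codebase_generator
```

The schedulers would then list both 'mainapp' and 'library' in their codebases configuration, and the build factory would contain one source step per codebase, each with its own workdir.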
https://support.microsoft.com/et-ee/windows/turve-ohutus-ja-privaatsus-308c5778-c3fe-46ad-9424-e6a10489e005 | Security, safety and privacy - Microsoft Support

Windows security, safety and privacy — Overview

Overview of security, safety and privacy
Windows security: Getting help with Windows security; Staying protected with Windows security; Before you sell, gift or recycle your Xbox or Windows PC; Removing malware from a Windows PC
Windows safety: Getting help with Windows safety; Viewing and deleting browser history in Microsoft Edge; Deleting and managing cookies; Safely removing valuable content when reinstalling Windows; Finding and locking a lost Windows device
Windows privacy: Getting help with Windows privacy; Windows privacy settings that apps use; Viewing your data on the privacy dashboard
Security, safety and privacy

Applies to: Windows

Windows security help — We've thought of this. Don't fall victim to online scams and attacks while you shop online, read your email, or browse the web. Our comprehensive protection solutions keep you secure.
Windows safety help — Explore the key strategies in our comprehensive guide to help protect yourself and your loved ones from online threats. Whether you are a parent, a young adult, a teacher, or an individual, our expert tips and insights give you the information you need to stay safe online.

Windows privacy help — Microsoft values your privacy. We give you control over your data by providing Windows privacy settings that you can review and change at any time.

Protect your PC with BitLocker — BitLocker is a Windows security feature that protects your data by encrypting your drives. Encryption ensures that if someone tries to access the disk offline, they cannot read its contents.
https://www.php.net/manual/en/ref.session.php | PHP: Session Functions - Manual
Session Functions

Table of Contents
session_abort — Discard session array changes and finish session
session_cache_expire — Get and/or set current cache expire
session_cache_limiter — Get and/or set the current cache limiter
session_commit — Alias of session_write_close
session_create_id — Create new session id
session_decode — Decodes session data from a session encoded string
session_destroy — Destroys all data registered to a session
session_encode — Encodes the current session data as a session encoded string
session_gc — Perform session data garbage collection
session_get_cookie_params — Get the session cookie parameters
session_id — Get and/or set the current session id
session_module_name — Get and/or set the current session module
session_name — Get and/or set the current session name
session_regenerate_id — Update the current session id with a newly generated one
session_register_shutdown — Session shutdown function
session_reset — Re-initialize session array with original values
session_save_path — Get and/or set the current session save path
session_set_cookie_params — Set the session cookie parameters
session_set_save_handler — Sets user-level session storage functions
session_start — Start new or resume existing session
session_status — Returns the current session status
session_unset — Free all session variables
session_write_close — Write session data and end session
User Contributed Notes (20 notes)

pautzomat at web dot de ¶ 22 years ago
Be aware of the fact that absolute URLs are NOT automatically rewritten to contain the SID. Of course, it says so in the documentation ('Passing the Session Id'), and of course it makes perfect sense to have that restriction, but here's what happened to me: I had been using sessions for quite a while without problems. When I used a global configuration file to be included in all my scripts, it contained a line like this: $sHomeDirectory = 'http://my.server.com/one/of/my/projects', which was used to make sure that all automatically generated links had the right prefix (just like $cfg['PmaAbsoluteUri'] works in phpMyAdmin). After introducing that variable, no link would pass the SID anymore, causing every script to return to the login page. It took me hours (!!) to recognize that this wasn't a bug in my code or some misconfiguration in php.ini, and then still some more time to find out what it was. The above restriction had completely slipped from my mind (if it ever was there...). Skipping the 'http:' did the job. OK, it was my own mistake, of course, but it just shows you how easily one can sabotage one's own work for hours... Just don't do it ;)

Edemilson Lima <pulstar at gmail dot com> ¶ 18 years ago
Sessions and browser tabs. As you may have noticed, when you open your website in two or more tabs in Firefox, Opera or IE 7.0, or use 'Control+N' in IE 6.0 to open a new window, it uses the same cookie or passes the same session id, so the other tab is just a copy of the previous tab. What you do in one will affect the other, and vice versa. Even if you open Firefox again, it will use the same cookie as the previous session. But that is not what you need most of the time, especially when you want to copy information from one place to another in your web application.
This occurs because the default session name is "PHPSESSID" and all tabs will use it. There is a workaround, and it relies only on changing the session's name. Put these lines at the top of your main script (the script that calls the subscripts) or at the top of each script you have:

<?php
if (version_compare(phpversion(), '4.3.0') >= 0) {
    if (!ereg('^SESS[0-9]+$', $_REQUEST['SESSION_NAME'])) {
        $_REQUEST['SESSION_NAME'] = 'SESS' . uniqid('');
    }
    output_add_rewrite_var('SESSION_NAME', $_REQUEST['SESSION_NAME']);
    session_name($_REQUEST['SESSION_NAME']);
}
?>

How it works: first we check that the PHP version is at least 4.3.0 (the function output_add_rewrite_var() is not available before this release). Then we check whether the SESSION_NAME element of the $_REQUEST array is a valid string in the format "SESSxxxxx", where xxxxx is a unique id generated by the script. If SESSION_NAME is not valid (i.e. not set yet), we assign it a value. uniqid('') will generate a unique id for a new session name. It doesn't need to be as strong as uniqid(rand(), TRUE), because all the security relies on the session id, not on the session name; we only need a different id for each session we open. Even getmypid() would be enough for this, though I don't know whether that may pose a threat to the web server — I don't think so. output_add_rewrite_var() will automatically add a 'SESSION_NAME=SESSxxxxx' pair to each link and web form in your website. But to work properly, you will need to add it manually to any header('location') and JavaScript code you have, like this:

<?php
header('location: script.php?' . session_name() . '=' . session_id()
    . '&SESSION_NAME=' . session_name());
?>

<input type="image" src="button.gif" onClick="javascript:open_popup('script.php?<?php echo session_name(); ?>=<?php echo session_id(); ?>&SESSION_NAME=<?php echo session_name(); ?>')" />

The last function, session_name(), defines the name of the actual session that the script will use. So every link, form, header() and piece of JavaScript will forward the SESSION_NAME value to the next script, and it will know which session it must use. If none is given, it will generate a new one (and so create a new session for the new tab). You may be asking why not use a cookie to pass the SESSION_NAME along with the session id instead. Well, the problem with a cookie is that all tabs will share the same cookie, and the sessions will mix anyway. Cookies will work partially if you set them on different paths, making each cookie available in its own directory, but this will not make the sessions in each tab completely separate from each other. Passing the session name through the URL via GET and POST is the best way, I think.

Csar ¶ 17 years ago
There's a bug in Internet Explorer in which sessions do not work if the name of the server is not a valid name. For example, if your server is called web_server ('_' isn't a valid character) and you call a page which uses sessions like http://web_server/example.php, your sessions won't work, but they will work if you call the script as [IP NUMBER]/example.php.

hinom - iMasters ¶ 17 years ago
Simple session test:

<?php
/* [EDIT by danbrown AT php DOT net: The author of this note named this
   file tmp.php in his/her tests. If you save it as a different name,
   simply update the links at the bottom to reflect the change.] */
session_start();
$sessPath   = ini_get('session.save_path');
$sessCookie = ini_get('session.cookie_path');
$sessName   = ini_get('session.name');
$sessVar    = 'foo';
echo '<br>sessPath: ' . $sessPath;
echo '<br>sessCookie: ' . $sessCookie;
echo '<hr>';
if (!isset($_GET['p'])) {
    // instantiate new session var
    $_SESSION[$sessVar] = 'hello world';
} elseif ($_GET['p'] == 1) {
    // print session value and global cookie PHPSESSID
    echo $sessVar . ': ';
    if (isset($_SESSION[$sessVar])) {
        echo $_SESSION[$sessVar];
    } else {
        echo '[not exists]';
    }
    echo '<br>' . $sessName . ': ';
    if (isset($_COOKIE[$sessName])) {
        echo $_COOKIE[$sessName];
    } elseif (isset($_REQUEST[$sessName])) {
        echo $_REQUEST[$sessName];
    } elseif (isset($_SERVER['HTTP_COOKIE'])) {
        echo $_SERVER['HTTP_COOKIE'];
    } else {
        echo 'problem, check your PHP settings';
    }
} else {
    // destroy the session var with unset()
    unset($_SESSION[$sessVar]);
    // check whether it was destroyed
    if (!isset($_SESSION[$sessVar])) {
        echo '<br>' . $sessVar . ' was unset';
    } else {
        echo '<br>' . $sessVar . ' was not unset';
    }
}
?>
<hr>
<a href=tmp.php?p=1>test 1 (printing session value)</a>
<br>
<a href=tmp.php?p=2>test 2 (kill session)</a>

Jeremy Speer ¶ 15 years ago
When working on a project, I found a need to switch live sessions between two different pieces of software. The documentation for doing this is scattered around different sites, especially in comments sections rather than examples. One difficulty I encountered was that the session save handler for one of the applications was set, whereas the other was not. Note that I didn't code session_set_save_handler() into the function; instead I call it manually once I'm done with the function, although the function could easily be extended to include that functionality. Basically, it is only overriding the system's default session save handler. To recover from this after you have used getSessionData(), just call session_write_close(), then session_set_save_handler() with the appropriate values, then re-run session_name(), session_id() and session_start() with their appropriate values.
If you don't know the session id, it's the string located in $_COOKIE[session_name], or $_REQUEST[session_name] if you are using trans_sid. [Note: use caution when trusting data from $_REQUEST; if at all possible, use $_GET or $_POST instead, depending on the page.]

<?php
function getSessionData($session_name = 'PHPSESSID', $session_save_handler = 'files')
{
    $session_data = array();
    # did we get told what the old session id was? we can't continue it without that info
    if (array_key_exists($session_name, $_COOKIE)) {
        # save current session id
        $session_id = $_COOKIE[$session_name];
        $old_session_id = session_id();
        # write and close current session
        session_write_close();
        # grab old save handler, and switch to files
        $old_session_save_handler = ini_get('session.save_handler');
        ini_set('session.save_handler', $session_save_handler);
        # now we can switch the session over, capturing the old session name
        $old_session_name = session_name($session_name);
        session_id($session_id);
        session_start();
        # get the desired session data
        $session_data = $_SESSION;
        # close this session, switch back to the original handler,
        # then restart the old session
        session_write_close();
        ini_set('session.save_handler', $old_session_save_handler);
        session_name($old_session_name);
        session_id($old_session_id);
        session_start();
    }
    # now return the data we just retrieved
    return $session_data;
}
?>

hinom06 [at] hotmail.co.jp ¶ 15 years ago
Simple session test, version 1.1:

<?php
/* [EDIT by danbrown AT php DOT net: The author of this note named this
   file tmp.php in his/her tests. If you save it as a different name,
   simply update the links at the bottom to reflect the change.] */
error_reporting(E_ALL);
ini_set('display_errors', 1);
date_default_timezone_set('Asia/Tokyo');
//ini_set('session.save_path', '/tmp'); // for debug purposes
session_start();
// Check whether session_id() exists.
/* For example, if it exists and the session won't be read, you must send
   session.name as a parameter in the URL. Some server configurations may
   have problems recognizing PHPSESSID, even if the trans_sid value is 0
   or 1, so this test is useful to identify the cause. */
if (session_id() == '') {
    echo 'session_id() empty';
} else {
    echo session_id();
}
echo '<hr>';
$sessPath   = ini_get('session.save_path');
$sessCookie = ini_get('session.cookie_path');
$sessName   = ini_get('session.name');
$sessVar    = 'foo';
echo '<br>sessPath: ' . $sessPath;
echo '<br>sessCookie: ' . $sessCookie;
echo '<hr>';
if (!isset($_GET['p'])) {
    // instantiate new session var
    $_SESSION[$sessVar] = 'hello world';
} elseif ($_GET['p'] == 1) {
    // print session value and global cookie PHPSESSID
    echo $sessVar . ': ';
    if (isset($_SESSION[$sessVar])) {
        echo $_SESSION[$sessVar];
    } else {
        echo '[not exists]';
    }
    echo '<br>' . $sessName . ': ';
    if (isset($_COOKIE[$sessName])) {
        echo $_COOKIE[$sessName];
    } elseif (isset($_REQUEST[$sessName])) {
        echo $_REQUEST[$sessName];
    } elseif (isset($_SERVER['HTTP_COOKIE'])) {
        echo $_SERVER['HTTP_COOKIE'];
    } else {
        echo 'problem, check your PHP settings';
    }
} else {
    // destroy the session var with unset()
    unset($_SESSION[$sessVar]);
    // check whether it was destroyed
    if (!isset($_SESSION[$sessVar])) {
        echo '<br>' . $sessVar . ' was unset';
    } else {
        echo '<br>' . $sessVar . ' was not unset';
    }
}
?>
<hr>
<a href=tmp.php?p=1&<?php echo $sessName . '=' . session_id(); ?>>test 1 (printing session value)</a>
<br>
<a href=tmp.php?p=2&<?php echo $sessName . '=' . session_id(); ?>>test 2 (kill session)</a>

Sam Yong - hellclanner at live dot com ¶ 14 years ago
The following has been tested as true in PHP 5.3.5: setting session variables after the execution of the script, i.e. in a __destruct function, will not work.
<?php
class Example
{
    function __destruct()
    {
        $_SESSION['test'] = true;
        session_write_close();
    }
}
?>

The above example will write nothing into the temporary session file, as I observed through a custom session save handler.

paul at shirron dot net ¶ 17 years ago
In php.ini, I have: session.save_path="C:\DOCUME~1\pjs9486\LOCALS~1\Temp\php\session". I was cleaning out the temp directory and deleted the php directory. Session handling quit working. I re-created the php directory. Still no luck. I re-created the session directory inside the php directory, and session handling resumed working. I would have expected session_start() to re-create directories in the path if they didn't exist, but it doesn't. Note to self: don't do that again!

Nigel Barlass ¶ 18 years ago
Lima's note on sessions and browser tabs needs to be modified for my version of PHP, as the call to uniqid('') will return an alphanumeric string. Hence the ereg statement should be: if (!ereg('^SESS[0-9a-z]+$', $_REQUEST['SESSION_NAME'])) {...

ted at tedmurph dot com ¶ 15 years ago
I was having problems with $_SESSION information not being written, or being lost in a seemingly random way. There was a Location: call being made deep in a Zend OAuth module, I am using an IIS server with PHP as a CGI, etc. The answer was simply that the domain needs to be consistent for sessions to work consistently. In my case, I was switching back and forth between www.EXAMPLE.com:888 and EXAMPLE.com:888. The unusual port, the hidden Location: call, the handoff with OAuth, etc. all served to confuse me, but the intermittent error was caused by the simple goof of not keeping the domain consistent.

edA-qa at disemia dot com ¶ 16 years ago
WARNING for Debian users. Just to drive you completely crazy, Debian does its own form of session management and will completely ignore any alterations to these values that you make within your PHP script.
Debian sets up a crontab (/etc/cron.d/php5) which deletes all session files, including those in subdirectories, that exceed the gc_maxlifetime specified in the php.ini file only. That is, on Debian (and likely variants like Ubuntu), modifying the session expiration settings (like gc_maxlifetime) at runtime does *NOTHING*. You *HAVE* to modify the global php.ini; not even a .htaccess file will help you.

LaurentT ¶ 17 years ago
For UNIX: one might encounter problems with sessions when hosting different sites on the same server: sessions would either merge, if one is using more than one site at a time, or crash, if the sites are owned by different system users. For instance: www.example.com is stored in /home/site1/www and www.example.net is stored in /home/site2/www. Using both www.example.com and www.example.net would cause sessions to act weird. If you're using PHP as an Apache module, you can easily use php_value in httpd.conf to set a unique session.name per site. If you're using suPHP (PHP as CGI) you can't use php_value, but you can use suPHP_ConfigPath. Here's an example:

<VirtualHost 10.10.10.10:8081>
    DocumentRoot /home/site1/www
    ServerName www.example.com
    suPHP_ConfigPath /home/site1/server_config
</VirtualHost>
<VirtualHost 10.10.10.10:8082>
    DocumentRoot /home/site2/www
    ServerName www.example.net
    suPHP_ConfigPath /home/site2/server_config
</VirtualHost>

Each server_config folder contains a php.ini file specific to the vHost. You then just have to change each session.name to a unique value and you're done!

session a emailaddress d cjb d net ¶ 17 years ago
It doesn't appear in the documentation, or in anyone's comment here, but setting session.gc_maxlifetime to 0 means the session will not expire until the browser is closed. Of course, this still doesn't fix the problems associated with the garbage collector doing its own thing.
The best solution to that still appears to be changing session.save_path up down -2 brfelipe08 at hotmail dot com ¶ 16 years ago If you need to use sessions, and some accents required for some Latin-based languages, you should encode your files in ISO-8859-1. You will run into some problems if you try to use UTF-8 - with or without BOM -, and ANSI will not support accents. ISO-8859-1 will both support sessions and the accents. up down -3 jitchavan at gmail dot com ¶ 14 years ago IE issue :- when form target set to iframe source and after posting form content you are setting session variables, In this scenario if parent page having image src blank then session values set in iframe action page will be LOST surprisingly in IE ONLY.Solution is quite simple don't keep Image src blank. up down -3 carl /a/ suchideas /o/ com ¶ 18 years ago Another gotcha to add to this list is that using a relative session.save_path is a VERY BAD idea. You can just about pull it off, if you're very careful, but note two related points: 1) The path is taken relative to the directory of the ORIGINALLY executed script, so unless all pages are run from the same directory, you'll have to set the directory separately in each individual subfolder 2) If you call certain functions, such as session_regenerate_id(), PHP will try to take the session directory relative to the exectuable, or something like that, creating an error IN the executable. This provides slightly cryptic error messages, like this: Warning: Unknown: open(relative_path\ilti9oq3j9ks0jvih1fmiq4sv1.session, O_RDWR) failed: No such file or directory (2) in Unknown on line 0 Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (relative_path) in Unknown on line 0 ... so don't even bother. Just use <?php ini_set ( "session.save_path" , dirname ( __FILE__ ). 
"/relative_path" ); ?> (or equivalent) in a file which you know is always in the same place relative to the file. {PHP version 5.1.6} up down -3 Trevor Brown ¶ 15 years ago A confirmation of behaviour, just in case this saves anyone else some time... There is (potentially) a special case when session.gc_probability = 0 and session.gc_divisor = 0. Depending on how they wanted to work their maths underneath, the coders *might* have opted to interpret 0/0 as 1 (which is sometimes assumed in certain maths proofs), but thankfully it seems they haven't. With both values set to 0, the session code will follow the spirit of the directives and invoke the garbage collector with zero probability. At least this is true with php 5.2.5 (which, to be certain, I inspected the source of). To be safe, however, setting both to zero is *probably* not a good idea because usually 0/0 is undefined, so it could presumably mean anything (some arguments exist that claim 0/0 is equal to every fraction). Wouldn't you rather know for sure what your probability was set to? In other words, don't set gc_divisor = 0 up down -4 farhad dot pd at gmail dot com ¶ 8 years ago <?php class Session { public static $seesionFlashName = '__FlashBack' ; /** * [__construct description] */ public function __construct () { } public static function start () { ini_set ( 'session.use_only_cookies' , 'Off' ); ini_set ( 'session.use_cookies' , 'On' ); ini_set ( 'session.use_trans_sid' , 'Off' ); ini_set ( 'session.cookie_httponly' , 'On' ); if (isset( $_COOKIE [ session_name ()]) && ! preg_match ( '/^[a-zA-Z0-9,\-]{22,52}$/' , $_COOKIE [ session_name ()])) { exit( 'Error: Invalid session ID!' 
); } session_set_cookie_params ( 0 , '/' ); session_start (); } public static function id () { return sha1 ( session_id ()); } public static function regenerate () { session_regenerate_id ( true ); } /** * [exists description] * @param [type] $name [description] * @return [type] [description] */ public static function exists ( $name ) { if(isset( $name ) && $name != '' ) { if(isset( $_SESSION [ $name ])) { return true ; } } return false ; } /** * [set description] * @param [type] $name [description] * @param [type] $value [description] */ public static function set ( $name = '' , $value = '' ) { if( $name != '' && $value != '' ) { $_SESSION [ $name ] = $value ; } } /** * [get description] * @param [type] $name [description] * @return [type] [description] */ public static function get ( $name ) { if( self :: exists ( $name )) { return $_SESSION [ $name ]; } return false ; } /** * [delete description] * @param [type] $name [description] * @return [type] [description] */ public static function delete ( $name ) { if( self :: exists ( $name )) { unset( $_SESSION [ $name ]); } return false ; } /** * [setFlash description] * @param string $value [description] */ public static function setFlash ( $value = '' ) { if( $value != '' ) { self :: set ( self :: $seesionFlashName , $value ); } } /** * [getFlash description] * @return [type] [description] */ public static function getFlash () { if( self :: exists ( self :: $seesionFlashName )) { ob_start (); echo self :: get ( self :: $seesionFlashName ); $content = ob_get_contents (); ob_end_clean (); self :: delete ( self :: $seesionFlashName ); return $content ; } return false ; } /** * [flashExists description] * @return [type] [description] */ public static function flashExists () { return self :: exists ( self :: $seesionFlashName ); } /** * [destroy description] * @return void [description] */ public static function destroy () { foreach( $_SESSION as $sessionName ) { self :: delete ( $sessionName ); } session_destroy (); } } 
?>
Contributed by Farhad Zand Moghadam (Iranian PHP programmer).

shanemayer42 at yahoo dot com (25 years ago):
Session garbage-collection observation: it appears that session-file garbage collection occurs AFTER the current session is loaded. This means that even if session.gc_maxlifetime = 1 second, if someone starts a session A and no one starts a session for an hour, that person can reconnect to session A and all of their previous session values will be available (that is, session A will not be cleaned up even though it is older than gc_maxlifetime).

edA-qa at disemia dot com (16 years ago):
Sessions may be deleted before the time limit you set in gc_maxlifetime. If you have multiple pages on the same server, each using the session (the same or distinct named sessions, it doesn't matter), the *minimum* gc_maxlifetime of any of those scripts ends up being the effective lifetime of the session files. This also applies to the lifetime set by a CLI invocation of a PHP script on that machine which happens to use the session. This can be bothersome: even though you think all your pages include the same setup file, a single PHP page which doesn't can invoke the GC and have the session files deleted. Thus, if you need a long gc_maxlifetime, you are best off setting it through the INI file or in a .htaccess file for the entire directory.
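The garbage-collection behaviour described in the notes above is governed by three INI settings. A sketch of pinning them per script before `session_start()` (the values are illustrative only):

```php
<?php
// Illustrative values: keep idle session files for ~1 hour, and run GC on
// roughly 1% of requests. Note the caveat in the note above: another script
// on the same host with a lower gc_maxlifetime can still delete these files
// earlier, so the INI file or .htaccess is the safer place for this.
ini_set('session.gc_maxlifetime', '3600'); // seconds a session file may idle
ini_set('session.gc_probability', '1');    // GC runs with probability
ini_set('session.gc_divisor', '100');      //   gc_probability / gc_divisor

session_start();
```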
Source: https://support.microsoft.com/ar-sa/windows/%D8%A5%D8%B9%D8%AF%D8%A7%D8%AF%D8%A7%D8%AA-%D8%A7%D9%84%D8%AE%D8%B5%D9%88%D8%B5%D9%8A%D8%A9-%D9%81%D9%8A-windows-%D8%A7%D9%84%D8%AA%D9%8A-%D8%AA%D8%B3%D8%AA%D8%AE%D8%AF%D9%85%D9%87%D8%A7-%D8%A7%D9%84%D8%AA%D8%B7%D8%A8%D9%8A%D9%82%D8%A7%D8%AA-8b7f2cf4-c359-bf99-0f69-2123cc9ddfc1 | Windows privacy settings that apps use (Microsoft Support; translated from Arabic)
Windows privacy settings that apps use

Applies to: Privacy, Windows 11, Windows 10

Windows provides many data-access capabilities to make apps useful and valuable to you. These capabilities, which are security constructs that grant access to personal data, cover things like your calendar, contacts, call history, and more. Each capability has its own privacy settings page, which lets you control the capability and which apps and services may use it.

To control which apps can use each capability in Windows 10:
1. Go to Start, then select Settings > Privacy.
2. Select the capability you want to let apps use, such as Calendar or Contacts.
3. Choose your preferred setting for letting apps use, access, control, or read the capability.
4. Choose which apps may use, access, control, or read the capability by turning individual apps and services on or off.

To control which apps can use each capability in Windows 11:
1. Go to Start, then select Settings > Privacy & security.
2. Select the capability you want to let apps use, such as Calendar or Contacts.
3. Turn the setting that lets anyone using the device access the capability on or off.
4. Choose which apps may access the capability by turning individual apps and services on or off.
Privacy settings exceptions

Desktop apps do not appear in the lists of apps and services that you can turn on and off, and they are not affected by the setting that lets apps access a capability. To allow or block desktop apps, use the settings within those apps themselves.

Note: How can you tell whether an app is a desktop app? Desktop apps are usually downloaded from the internet or installed from some kind of media (such as a CD, DVD, or USB storage device). They are launched via an .EXE or .DLL file and typically run on your device, unlike web-based apps (which run in the cloud). You can also find desktop apps in the Microsoft Store.
Source: https://llvmweekly.org/issue/536

LLVM Weekly - #536, April 8th 2024

Welcome to the five hundred and thirty-sixth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org. EuroLLVM is taking place this week - hopefully I'll see many of you there!

News and articles from around the web and events

LLVM 18.1.3 was released.

David Malcolm blogged about improvements to static analysis in the GCC 14 compiler. Do check out the lovely text-based diagrams depicting a potential buffer overflow.

Rubén Pérez performed benchmarks and analysis of using C++20 modules with Boost.

According to the LLVM calendar, in the coming week there will be:
- Office hours with the following hosts: Aaron Ballman, Alexey Bader, Alina Sbirlea, Johannes Doerfert.
- Online sync-ups on the following topics: pointer authentication, SPIR-V, new contributors, OpenMP, Clang C/C++ working group, Flang, BOLT, LLVM libc.

For more details see the LLVM calendar and the getting-involved documentation on online sync-ups and office hours.

On the forums

Tobias Hieta kicked off a discussion on improving binary security in LLVM, noting that anyone with commit access can currently add artifacts to an LLVM release. This generated lots of discussion that Tobias very helpfully summarised.

Johannes Doerfert announced that the offload/ subfolder has been created and work is underway to move/rename libomptarget.

Kristof Beyls put up an RFC on a new BOLT-based binary analysis tool to verify the correctness of security hardening.
Lawrence Benson proposed adding __builtin_scatter and __builtin_gather to the Clang frontend.

Cassie Jones suggested having clang --version print more information about the build config, especially whether the build had expensive checks or assertions enabled.

Alex Zinenko shared the program for the MLIR workshop prior to the EuroLLVM Developer meeting.

Björn Pettersson wondered what is allowed to be done with FREEZE(UNDEF) in SelectionDAG and started to get some answers.

Vy Nguyen looped back on the RFC thread on LLDB telemetry and metrics to note that the original proposal has been updated.

LLVM commits

- The AArch64 backend was tweaked to recognise that folding lsl into load/store operations is cheap on almost all cores. c83f23d.
- atomicrmw floating-point operations with vector types are now supported. 4cb110a.
- llvm.allow.{runtime,ubsan}.check intrinsics were introduced. 90c738e.
- Binary instrumentation for type profiling was implemented. 1351d17.
- New InstCombines were added for select combined with and/or/xor. 4ef22fc.
- The --preserve-input-debuginfo-format flag was added. 379628d.
- The loop vectoriser started to learn to generate VP intrinsics. 413a66f.

Clang commits

- A readability-enum-initial-value clang-tidy check was added. 3365d62.
- Array temporary support for HLSL was implemented. 9434c08.
- ExtractAPI can now create multiple symbol graphs. b31414b.

Other project commits

- libcxx's "current status" description was updated with information on the OSes that use it as the default implementation and an estimate of the number of users (1B daily active users!). 53d256b.
- Flang's ExternalNameConversion pass was dramatically sped up. 2d14ea6.
- LLVM's libc gained an atan2f implementation. 2be7225.
- libclc's CMake build system was improved, but more changes are still needed to properly support in-tree builds. 61efea7.
- LLD now supports AArch64 AUTH relocations. cca9115.
- The offload subfolder and README were added. 33992ea.

Subscribe at LLVMWeekly.org.
Source: https://support.microsoft.com/el-gr/microsoft-edge/microsoft-edge-%CE%B4%CE%B5%CE%B4%CE%BF%CE%BC%CE%AD%CE%BD%CE%B1-%CF%80%CE%B5%CF%81%CE%B9%CE%AE%CE%B3%CE%B7%CF%83%CE%B7%CF%82-%CE%BA%CE%B1%CE%B9-%CF%80%CF%81%CE%BF%CF%83%CF%84%CE%B1%CF%83%CE%AF%CE%B1-%CF%80%CF%81%CE%BF%CF%83%CF%89%CF%80%CE%B9%CE%BA%CF%8E%CE%BD-%CE%B4%CE%B5%CE%B4%CE%BF%CE%BC%CE%AD%CE%BD%CF%89%CE%BD-bb8174ba-9d73-dcf2-9b4a-c582b4e640dd | Microsoft Edge, browsing data, and privacy (Microsoft Support; translated from Greek)
Microsoft Edge, browsing data, and privacy

Applies to: Privacy, Microsoft Edge, Windows 10, Windows 11

Microsoft Edge helps you browse, search, shop online, and more. Like all modern browsers, Microsoft Edge lets you collect and store specific data on your device, such as cookies, and lets you send information to us, such as browsing history, to make your experience as rich, fast, and personal as possible. Whenever we collect data, we want to make sure it is the right choice for you. Some users worry about the collection of their web browsing history. That is why we tell you which data is stored on your device or collected by us, and give you choices to control which data is collected. For more information about privacy in Microsoft Edge, we recommend reading the Privacy Statement.

Types of data collected or stored, and why

Microsoft uses diagnostic data to improve its products and services. We use this data to better understand how our products perform and where improvements are needed. Microsoft Edge collects a set of required diagnostic data to keep Microsoft Edge secure, up to date, and working as expected. Microsoft supports and applies data-minimization practices.
We strive to collect only the information we need, and to store it only as long as is necessary to provide a service or for analysis. In addition, you can control whether optional diagnostic data associated with your device is shared with Microsoft to solve product problems and to improve Microsoft products and services. As you use features and services in Microsoft Edge, diagnostic data about how you use those features is sent to Microsoft. Microsoft Edge stores your browsing history, that is, information about the websites you visit, on your device. Depending on your settings, this browsing history is sent to Microsoft to help it find and fix problems and improve its products and services for all users. You can manage the collection of optional diagnostic data in the browser by selecting Settings and more > Settings > Privacy, search, and services > Privacy and turning Send optional diagnostic data to improve Microsoft products on or off. This includes data from previews of new experiences. To finish applying changes to this setting, restart Microsoft Edge. Turning these settings on allows this optional diagnostic data to be shared with Microsoft by other apps that use Microsoft Edge, such as a video-streaming app that hosts the Microsoft Edge web platform to stream its video. The Microsoft Edge web platform will send Microsoft information about how you use the web platform and which sites you visit in that app.
This data collection is governed by your "Optional diagnostic data" settings under "Privacy, search, and services" in Microsoft Edge. On Windows 10, these settings are determined by the Windows diagnostic-data setting you have chosen. To change the diagnostic-data setting, select Start > Settings > Privacy > Diagnostics & feedback. As of March 6, 2024, Microsoft Edge diagnostic data is collected separately from Windows diagnostic data on Windows 10 (version 22H2 and later) and Windows 11 (version 23H2 and later) devices in the European Economic Area. For those versions of Windows, and on all other platforms, you can change your settings in Microsoft Edge by selecting Settings and more > Settings > Privacy, search, and services. In some cases, diagnostic-data settings may be managed by your organization. When you search, Microsoft Edge can offer suggestions about what you are looking for. To turn this feature on, select Settings and more > Settings > Privacy, search, and services > Search and connected experiences > Address bar and search > Search suggestions and filters, and turn on Show search and site suggestions using my typed characters. As soon as you start typing, the information you enter in the address bar is sent to your default search provider so you can get instant search or website suggestions.
When you use InPrivate browsing or Guest mode, Microsoft Edge collects some information about how you use the browser, depending on your Windows diagnostic-data setting or your Microsoft Edge privacy settings, but automatic suggestions are turned off and no information about the websites you visit is collected. Microsoft Edge deletes your browsing history, cookies, and site data, as well as any passwords, addresses, and form data, when you close all InPrivate windows. You can start a new InPrivate session by selecting Settings and more on a computer, or Tabs on a mobile device. Microsoft Edge also has features that help protect your safety, and that of your content, on the internet. Windows Defender SmartScreen automatically blocks websites and content downloads that have been reported as malicious. Windows Defender SmartScreen checks the address of the web page you are visiting against a list of web-page addresses stored on your device that Microsoft considers legitimate. Addresses that are not on that list, and the addresses of files you download, are sent to Microsoft and checked against a frequently updated list of web pages and downloads that have been reported to Microsoft as unsafe or suspicious. To speed up tedious tasks such as filling in forms and entering passwords, Microsoft Edge can store that information and help you along. If you choose to use these features, Microsoft Edge stores the information on your device.
If you have turned on sync for form-fill data such as addresses or passwords, that information is sent to the Microsoft cloud and stored with your Microsoft account, so it syncs across every copy of Microsoft Edge you are signed in to. You can manage this data under Settings and more > Settings > Profiles > Sync. To integrate your browsing experience with other activities on your device, Microsoft Edge shares your browsing history with Microsoft Windows through the index. This information is stored locally on the device. It includes URLs, a category the URL may be relevant to, such as "most visited", "recently visited", or "recently closed", and a relative frequency or recency within each category. Websites you visit while in InPrivate mode are not shared. This information is then available to other experiences on the device, such as the Start menu or the taskbar. You can manage this feature by selecting Settings and more > Settings > Profiles and turning Share browsing data with other Windows features on or off. If it is turned off, the data you previously shared is deleted. To protect some video and music content from being copied, some streaming websites store digital rights management (DRM) data on your device, including a unique identifier (ID) and media licenses. When you visit one of these websites, it retrieves the DRM information to make sure you have permission to use the content.
Microsoft Edge also stores cookies, small files placed on your device as you browse the web. Many websites use cookies to store information about your preferences and settings, for example saving the items in your shopping cart so you do not have to add them again on every visit. Some websites also use cookies to collect information about your online activity in order to show interest-based ads. Microsoft Edge gives you options to clear cookies and to block websites from storing cookies in the future. Microsoft Edge sends "Do Not Track" requests to websites when the Send "Do Not Track" requests setting is turned on. This setting is available under Settings and more > Settings > Privacy, search, and services > Privacy > Send "Do Not Track" requests. Websites may, however, still track your activity even when you have sent a "Do Not Track" request.

How to clear data collected or stored by Microsoft Edge

To clear browsing information stored on your device, such as saved passwords or cookies:
1. In Microsoft Edge, select Settings and more > Settings > Privacy, search, and services > Clear browsing data.
2. Select Choose what to clear next to Clear browsing data now.
3. Under Time range, select a time range.
4. Select the checkbox next to each type of data you want to clear, then select Clear now.
If you like, you can select Choose what to clear every time you close the browser and pick the types of data that should be cleared. Learn more about what is deleted for each item in the browser history.

To clear browsing history collected by Microsoft: to see the browsing history associated with your account, sign in to your account at account.microsoft.com. You also have the option to clear the browsing data Microsoft has collected by using the Microsoft Privacy Dashboard. To delete browsing history and other diagnostic data associated with your Windows 10 device, select Start > Settings > Privacy > Diagnostics & feedback, then select Delete under Delete diagnostic data.

To clear browsing history that has been shared with other Microsoft features on the local device:
1. In Microsoft Edge, select Settings and more > Settings > Profiles.
2. Select Share browsing data with other Windows features.
3. Turn the setting off.

How to manage privacy settings in Microsoft Edge

To review and adjust your privacy settings, select Settings and more > Settings > Privacy, search, and services > Privacy. To learn more about privacy in Microsoft Edge, read the Microsoft Edge privacy whitepaper.
Source: https://www.php.net/manual/ja/function.session-name.php | PHP: session_name - Manual (translated from Japanese)
session_name (PHP 4, PHP 5, PHP 7, PHP 8) — session_name — Get and/or set the current session name

Description: session_name(?string $name = null): string|false

session_name() returns the name of the current session. If name is given, session_name() updates the session name and returns the old session name. When the name of a new session is supplied, session_name() modifies the HTTP cookie (and the output content when session.use_trans_sid is enabled). Once the HTTP cookie has been sent, session_name() raises an E_WARNING. session_name() must be called before session_start() for the session to work properly. The session name is reset at request startup to the default value stored in session.name; thus, you need to call session_name() for every request (and before session_start() is called).

Parameters: name — The name of the session, which is used as the cookie name and in URLs (e.g. PHPSESSID). It should contain only alphanumeric characters; it should be short and descriptive (i.e. for users with enabled cookie warnings). If name is specified and not null, the name of the current session is changed to its value.

Warning: The session name can't consist only of digits; at least one letter must be present. Otherwise, a new session id is generated every time.

Return Values: Returns the name of the current session. If name is given, session_name() updates the session name and returns the old session name, or false on failure.

Changelog: 7.2.0 — name is nullable now. 7.2.0 — session_name() now checks the session status; previously it only checked the cookie status, so the older session_name() allowed calling session_name() after session_start(), which could crash PHP or cause misbehavior.

Example #1 session_name() example: <?php /* set the session name to WebsiteID */ $previous_name = session_name ( "WebsiteID" ); echo "The previous session name was $previous_name<br />" ; ?>

See Also: the session.name configuration directive.
User Contributed Notes (9 notes)

Hongliang Qiang ¶ 21 years ago: This may sound like a no-brainer: the session_name() function will have no essential effect if you set session.auto_start to "true" in php.ini. The obvious explanation is that the session has already started, and thus cannot be altered, before the session_name() function is executed, wherever it is in the script; for the same reason, session_name() needs to be called before session_start(), as documented. I know it is really not a big deal, but I had quite a hard time before figuring this out, and hope it might be helpful to someone like me.

php at wiz dot cx ¶ 17 years ago: If you try to name a PHP session "example.com", it gets converted to "example_com" and everything breaks. Don't use a period in your session name.

relsqui at chiliahedron dot com ¶ 16 years ago: Remember, kids, you MUST use session_name() first if you want to use session_set_cookie_params() to, say, change the session timeout. Otherwise it won't work, won't give any error, and nothing in the documentation (that I've seen, anyway) will explain why. Thanks to brandan of bildungsroman.com, who left a note under session_set_cookie_params() explaining this, or I'd probably still be throwing my hands up about it.

Joseph Dalrymple ¶ 14 years ago: For those wondering, this function is expensive! On a script that was executing in a consistent 0.0025 seconds, just the use of session_name("foo") shot my execution time up to ~0.09s. By simply removing session_name("foo"), I sped my script up by roughly 0.09 seconds.

Victor H ¶ 10 years ago: As Joseph Dalrymple said, adding session_name() does slow down the execution time a little bit. But what I've observed is that it decreased the fluctuation between requests. Requests on my script fluctuated between 0.045 and 0.022 seconds. With session_name("myapp"), it goes to 0.050 and 0.045.
Not a big deal, but that's a point to note. For those with problems setting the name: when session.auto_start is set to 1, you need to set session.name in php.ini!

mmulej at gmail dot com ¶ 4 years ago: Hope this is not outside php.net's noting scope. session_name('name') must be set before session_start() because the former changes ini settings and the latter reads them. For the same reason, session_set_cookie_params($options) must be set before session_start() as well. I find it best to do the following:

function is_session_started() {
    if (php_sapi_name() === 'cli') return false;
    if (version_compare(phpversion(), '5.4.0', '>=')) return session_status() === PHP_SESSION_ACTIVE;
    return session_id() !== '';
}
if (!is_session_started()) {
    session_name($session_name);
    session_set_cookie_params($cookie_options);
    session_start();
}

tony at marston-home dot demon dot co dot uk ¶ 7 years ago: The description that session_name() gets and/or sets the name of the current session is technically wrong. It does nothing but deal with the value originally supplied by the session.name value within the php.ini file. Thus:

$name = session_name();
is functionally equivalent to
$name = ini_get('session.name');

and

session_name('newname');
is functionally equivalent to
ini_set('session.name', 'newname');

This also means that:

$old_name = session_name('newname');
is functionally equivalent to
$old_name = ini_set('session.name', 'newname');

The current value of session.name is not attached to a session until session_start() is called. Once session_start() has used session.name to look up the session id in the cookie data, the name becomes irrelevant, as all further operations on the session data are keyed by the session id. Note that changing session.name while a session is currently active will not update the name in any session cookie.
The new name does not take effect until the next call to session_start(), and this requires that the current session, which was created with the previous value for session.name, be closed.

tony at marston-home dot demon dot co dot uk ¶ 7 years ago: The description has recently been modified to contain the statement "When new session name is supplied, session_name() modifies HTTP cookie". This is not correct, as session_name() has never modified any cookie data. A change in session.name does not become effective until session_start() is called, and it is session_start() that creates the cookie if it does not already exist. See the following bug report for details: https://bugs.php.net/bug.php?id=76413

descartavel1+php at gmail dot com ¶ 2 years ago: Always try to set the prefix of your session name to either `__Host-` or `__Secure-` to benefit from browsers' improved security. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes Also, if you have auto_start enabled, you must set this name in session.name in your config (php.ini, .htaccess, etc.). | 2026-01-13T09:30:34 |
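The naming rules quoted in the manual above (alphanumeric characters only, and not digits-only) can be sketched as a standalone check. Python is used here purely to illustrate the stated rule; it is not how PHP itself enforces it, and the helper name is my own:

```python
import re

def is_valid_session_name(name: str) -> bool:
    """Illustrative check mirroring the manual's constraints:
    alphanumeric characters only, and at least one letter
    (a digits-only name would cause a new session id every time)."""
    return bool(re.fullmatch(r"[A-Za-z0-9]+", name)) and not name.isdigit()

print(is_valid_session_name("WebsiteID"))    # True
print(is_valid_session_name("12345"))        # False: digits only
print(is_valid_session_name("example.com"))  # False: '.' is not alphanumeric
```

The last case matches the user note above: a period in the name gets rewritten to an underscore and breaks the cookie lookup.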
https://llvmweekly.org/issue/532 | LLVM Weekly - #532, March 11th 2024 LLVM Weekly - #532, March 11th 2024 Welcome to the five hundred and thirty-second issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events LLVM 18.1.0 (and the quick follow-up 18.1.1 fixing a version numbering issue) was released . Congratulations and thank you to everyone involved. Note that this is the first release using the new versioning scheme . The next LLVM Bay Area meetup will take place on Monday 18th March . MaskRay blogged about a compact relocation format for ELF . According to the LLVM calendar in the coming week there will be the following (note, the US entered daylight savings time this weekend): Office hours with the following hosts: Aaron Ballman, Alexey Bader, Alina Sbirlea, Kristof Beyls, Johannes Doerfert. Online sync-ups on the following topics: pointer authentication, SPIR_V, new contributors, OpenMP, Flang, BOLT, libc, MLIR, RISC-V. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Jeremy Morse provided another update on work to eliminate debug intrinsics. Nikita Popov summarised recent improvements to the IR parser , including the removal of the requirements to have declarations for intrinsics and for unnamed values to be consecutive, and an -allow-incomplete-ir option for opt . Michael Buch posted an LLDB RFC on enabling more reliable completion of record types . Minutes from the February LLVM Foundation board meeting have now been posted . The 63rd MLIR News is now available . 
Nikita Popov proposed adding nowrap flags to the trunc IR instruction , motivated by use cases like induction variable widening. The responses so far are very positive. LLVM commits The frontend performance tips documentation was expanded with tips about loads/stores and aggregates. ff66e9b . update_test_checks now keeps meta variables ( TMPnn ) stable by default, reducing the noise in diffs. 3846019 . A range attribute was introduced for parameter and return values. 4028267 . STRICT_BF16_TO_FP and STRICT_FP_TO_BF16 SelectionDAG nodes were introduced. 8300f30 . Vector reduction instructions were added to the SPIR-V backend. 540d255 . AMDGPU gained a -lower-buffer-fat-pointers pass which rewrites away address space 7. 6540f16 . Support was added for parsing non-instruction debug info from textual IR. 464d9d9 , 01e5d46 . JITLink gained support for Intel VTune. 00f4121 . Clang commits The C++23 [[assume]] attribute was implemented. 2b5f68a . It’s now possible to use a shorthand for GitHub issue links in the Clang release notes. 6e36ceb . -Wmissing-designated-field-initializers was added. 7df43cc . __builtin_cpu_supports was documented. dcd08da . Other project commits LLVM’s libc GPU documentation pages were overhauled. 0cbbcf1 . Support for additional Linux kernel sections was added to BOLT. 0262979 , 143afb4 , f51ade2 , ccf0c8d . The MLIR inline pass was refactored ahead of adding cost model hooks. 2542d34 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://www.visma.com/voiceofvisma/episode-4-joakim-tauren | Ep 04: “How do you make people care about security?” with Joakim Tauren Voice of Visma June 5, 2024 Spotify
YouTube Apple Podcasts Amazon Music About the episode With over 700 applications across the Visma Group (and counting!), it’s safe to say that cybersecurity is a make-or-break element for us. But getting everyone’s buy-in is not an easy feat. Join Joakim and Diana as they break down the unique structure of the Visma Security Program – and the techniques they use to keep people engaged. More from Voice of Visma We're sitting down with leaders and colleagues from around Visma to share their stories, industry knowledge, and valuable career lessons. With the Voice of Visma podcast, we’re bringing our people and culture closer to you. Get to know the podcast Ep 22: Building, learning, and accelerating growth in the SaaS world with Maxin Schneider Entrepreneurial leadership often grows through experience, and Maxin Schneider has seen that up close. Read more Ep 21: How DEI fuels business success with Iveta Bukane Why DEI isn't just a moral imperative—it’s a business necessity. Read more Ep 20: Driving tangible sustainability outcomes with Freja Landewall Discover how ESG goes far beyond the environment, encompassing people, governance, and the long-term resilience of business. Read more Ep 19: Future-proofing public services in Sweden with Marie Ceder Between demographic changes, the rise in AI, and digitalisation, the public sector is at a pivotal moment. Read more Ep 18: Making inclusion part of our everyday work with Ida Algotsson What does inclusion truly mean at Visma – not just as values, but as everyday actions?
Read more Ep 17: Sustainability at the heart of business with Robin Åkerberg Honouring our responsibility goes well beyond the numbers – it starts with a shared purpose and values. Read more Ep 16: Innovation for the public good with Kasper Lyhr Serving the public sector goes way beyond software – it’s about shaping the future of society as a whole. Read more Ep 15: Leading with transparency and vulnerability with Ellen Sano What does it mean to be a “firestarter” in business? Read more Ep 14: Women, innovation, and the future of Visma with Merete Hverven Our CEO, Merete, knows that great leadership takes more than just hard work – it takes vision. Read more Ep 13: Building partnerships beyond software with Daniel Ognøy Kaspersen What does it look like when an accounting software company delivers more than just great software? Read more Ep 12: AI in the accounting sphere with Joris Joppe Artificial intelligence is changing industries across the board, and accounting is no exception. But in such a highly specialised field, what does change actually look like? Read more Ep 11: From Founder to Segment Director with Ari-Pekka Salovaara Ari-Pekka is a serial entrepreneur who joined Visma when his company was acquired in 2010. He now leads the small business segment. Read more Ep 10: When brave choices can save a company with Charlotte von Sydow What’s it like stepping in as the Managing Director for a company in decline? Read more Ep 09: Revolutionising tax tech in Italy with Enrico Mattiazzi and Vito Lomele Take one look at their product, their customer reviews, or their workplace awards, and it’s clear why Fiscozen leads Italy’s tax tech scene. Read more Ep 08: Navigating the waters of entrepreneurship with Steffen Torp When it comes to being an entrepreneur, the journey is as personal as it is unpredictable. Read more Ep 07: The untold stories of Visma with Øystein Moan What did Visma look like in its early days? 
Are there any decisions our former CEO would have made differently? Read more Ep 06: Measure what matters: Employee engagement with Vibeke Müller Research shows that having engaged, happy employees is so important for building a great company culture and performing better financially. Read more Ep 05: Our Team Visma | Lease a Bike sponsorship with Anne-Grethe Thomle Karlsen It’s one thing to sponsor the world’s best cycling team; it’s a whole other thing to provide software and expertise that helps them do what they do best. Read more Ep 04: “How do you make people care about security?” with Joakim Tauren With over 700 applications across the Visma Group (and counting!), cybersecurity is make-or-break for us. Read more Ep 03: The human side of enterprise with Yvette Hoogewerf As a software company, our products are central to our business… but that’s only one part of the equation. Read more Ep 02: From Management Trainee to CFO with Stian Grindheim How does someone work their way up from Management Trainee to CFO by the age of 30? And balance fatherhood alongside it all? Read more Ep 01: An optimistic look at the future of AI with Jacob Nyman We’re all-too familiar with the fears surrounding artificial intelligence. So today, Jacob and Johan are flipping the script. Read more (Trailer) Introducing: Voice of Visma These are the stories that shape us... and the reason Visma is unlike anywhere else. 
Read more | 2026-01-13T09:30:34 |
https://docs.aws.amazon.com/id_id/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html#slr-permissions-lambda-replicator | Set up IAM permissions and roles for Lambda@Edge - Amazon CloudFront | Amazon CloudFront Documentation, Developer Guide. In this topic: IAM permissions required to associate Lambda@Edge functions with CloudFront distributions; Function execution role for service principals; Service-linked roles for Lambda@Edge.

Set up IAM permissions and roles for Lambda@Edge

To configure Lambda@Edge, you must have the following IAM permissions and roles for AWS Lambda:

IAM permissions – These permissions allow you to create your Lambda function and associate it with your CloudFront distribution.

A Lambda function execution role (IAM role) – The Lambda service principals assume this role to run your function.

Service-linked roles for Lambda@Edge – The service-linked roles allow specific AWS services to replicate Lambda functions to AWS Regions and to enable CloudFront to use CloudWatch log files.

IAM permissions required to associate Lambda@Edge functions with CloudFront distributions

In addition to the IAM permissions that you need for Lambda, you need the following permissions to associate Lambda functions with CloudFront distributions:

lambda:GetFunction – Grants permission to get configuration information for your Lambda function and a presigned URL to download the .zip file that contains the function.

lambda:EnableReplication* – Grants permission to the resource policy so that the Lambda replication service can get the function code and configuration.

lambda:DisableReplication* – Grants permission to the resource policy so that the Lambda replication service can delete the function.
Important: You must add the asterisk ( * ) at the end of the lambda:EnableReplication* and lambda:DisableReplication* actions. For the resource, specify the ARN of the function version that you want to execute when a CloudFront event occurs, as in the following example: arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2

iam:CreateServiceLinkedRole – Grants permission to create a service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. After you configure Lambda@Edge for the first time, the service-linked role is created for you automatically. You don't need to add this permission to other distributions that use Lambda@Edge.

cloudfront:UpdateDistribution or cloudfront:CreateDistribution – Grants permission to update or create a distribution.

For more information, see the following topics: Identity and Access Management for Amazon CloudFront; Lambda resource access permissions in the AWS Lambda Developer Guide.

Function execution role for service principals

You must create an IAM role that can be assumed by the lambda.amazonaws.com and edgelambda.amazonaws.com service principals when they execute your function.

Tip: When you create your function in the Lambda console, you can choose to create a new execution role by using an AWS policy template. This step automatically adds the Lambda@Edge permissions that are required to run your function. See Step 5 in Tutorial: Create a basic Lambda@Edge function. For more information about creating an IAM role manually, see Creating a role and attaching a policy (console) in the IAM User Guide.

Example: Role trust policy. You can add this role under the Trust Relationships tab in the IAM console. Do not add this policy under the Permissions tab.
JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }

For more information about the permissions that you need to grant to the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide.

Note: By default, whenever a CloudFront event triggers a Lambda function, data is written to CloudWatch Logs. If you want to use these logs, the execution role needs permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole to grant permission to the execution role. For more information about CloudWatch Logs, see Edge function logs. If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that action.

Service-linked roles for Lambda@Edge

Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. Lambda@Edge uses the following IAM service-linked roles:

AWSServiceRoleForLambdaReplicator – Lambda@Edge uses this role to allow Lambda@Edge to replicate functions to AWS Regions. When you first add a Lambda@Edge trigger in CloudFront, a role named AWSServiceRoleForLambdaReplicator is created automatically to allow Lambda@Edge to replicate functions to AWS Regions. This role is required in order to use Lambda@Edge functions.
The ARN for the AWSServiceRoleForLambdaReplicator role looks like the following example: arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator

AWSServiceRoleForCloudFrontLogger – CloudFront uses this role to push log files to CloudWatch. You can use the log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association in CloudFront, to allow CloudFront to push Lambda@Edge error log files to CloudWatch. The ARN for the AWSServiceRoleForCloudFrontLogger role looks like this: arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger

Service-linked roles make setting up and using Lambda@Edge easier because you don't have to add the required permissions manually. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume these roles. The defined permissions include the trust policy and the permissions policy; the permissions policy cannot be attached to any other IAM entity. You must remove your related CloudFront or Lambda@Edge resources before you can delete a service-linked role. This helps protect your Lambda@Edge resources, so that you don't remove a service-linked role that is still required to access active resources. For more information about service-linked roles, see Service-linked roles for CloudFront.

Service-linked role permissions for Lambda@Edge

Lambda@Edge uses two service-linked roles, named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each of these roles.
Contents: Service-linked role permissions for the Lambda replicator; Service-linked role permissions for the CloudFront logger.

Service-linked role permissions for the Lambda replicator

This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the specified resources:

lambda:CreateFunction on arn:aws:lambda:*:*:function:*
lambda:DeleteFunction on arn:aws:lambda:*:*:function:*
lambda:DisableReplication on arn:aws:lambda:*:*:function:*
iam:PassRole on all AWS resources
cloudfront:ListDistributionsByLambdaFunction on all AWS resources

Service-linked role permissions for the CloudFront logger

This service-linked role allows CloudFront to push log files into CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the resource arn:aws:logs:*:*:log-group:/aws/cloudfront/*: logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents.

You must configure permissions to allow an IAM entity (such as a user, group, or role) to delete the Lambda@Edge service-linked roles. For more information, see Service-linked role permissions in the IAM User Guide.

Creating service-linked roles for Lambda@Edge

You don't typically create the service-linked roles for Lambda@Edge manually. The service creates the roles for you automatically in the following scenarios:

When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role if it doesn't already exist. This role allows Lambda to replicate your Lambda@Edge functions to AWS Regions. If you delete the service-linked role, the role is created again when you add a new trigger for Lambda@Edge in a distribution.

When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role if it doesn't already exist. This role allows CloudFront to push your log files to CloudWatch. If you delete the service-linked role, the role is created again when you update or create a CloudFront distribution that has a Lambda@Edge association.

To create these service-linked roles manually, you can run the following AWS Command Line Interface (AWS CLI) commands:

To create the AWSServiceRoleForLambdaReplicator role, run the following command: aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com

To create the AWSServiceRoleForCloudFrontLogger role, run the following command: aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com

Editing the Lambda@Edge service-linked roles

Lambda@Edge doesn't allow you to edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After a service creates a service-linked role, you can't change the name of the role, because various entities might reference the role. However, you can use IAM to edit the description of the role. For more information, see Editing a service-linked role in the IAM User Guide.
Supported AWS Regions for the Lambda@Edge service-linked roles

CloudFront supports using service-linked roles for Lambda@Edge in the following AWS Regions:

US East (N. Virginia) – us-east-1
US East (Ohio) – us-east-2
US West (N. California) – us-west-1
US West (Oregon) – us-west-2
Asia Pacific (Mumbai) – ap-south-1
Asia Pacific (Seoul) – ap-northeast-2
Asia Pacific (Singapore) – ap-southeast-1
Asia Pacific (Sydney) – ap-southeast-2
Asia Pacific (Tokyo) – ap-northeast-1
Europe (Frankfurt) – eu-central-1
Europe (Ireland) – eu-west-1
Europe (London) – eu-west-2
South America (São Paulo) – sa-east-1 | 2026-01-13T09:30:34 |
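The individual permissions listed on this page can be collected into a single identity-based IAM policy. The following is a minimal sketch, not an official AWS-published policy; the account ID, function name, and version are the placeholder values from the page's own example ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:EnableReplication*",
        "lambda:DisableReplication*"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "cloudfront:UpdateDistribution",
        "cloudfront:CreateDistribution"
      ],
      "Resource": "*"
    }
  ]
}
```

Note the qualified ARN ending in :2 — per the Important callout above, the resource must be the specific function version that runs when the CloudFront event occurs, and the EnableReplication/DisableReplication actions keep their trailing asterisk.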
https://docs.aws.amazon.com/it_it/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html | Configurazione di ruoli e autorizzazioni IAM per Lambda@Edge - Amazon CloudFront Configurazione di ruoli e autorizzazioni IAM per Lambda@Edge - Amazon CloudFront Documentazione Amazon CloudFront Guida per gli sviluppatori Autorizzazioni IAM necessarie per associare le funzioni Lambda @Edge alle distribuzioni CloudFront Ruolo di esecuzione della funzione per i principali del servizio Ruoli collegati ai servizi per Lambda@Edge Le traduzioni sono generate tramite traduzione automatica. In caso di conflitto tra il contenuto di una traduzione e la versione originale in Inglese, quest'ultima prevarrà. Configurazione di ruoli e autorizzazioni IAM per Lambda@Edge Per configurare Lambda@Edge, devi disporre delle seguenti autorizzazioni e ruoli IAM per AWS Lambda: Autorizzazioni IAM : queste autorizzazioni ti consentono di creare la tua funzione Lambda e associarla alla tua distribuzione. CloudFront Un ruolo di esecuzione della funzione Lambda (ruolo IAM): i principali del servizio Lambda assumono questo ruolo per eseguire la funzione. Ruoli collegati ai servizi per Lambda @Edge: i ruoli collegati ai servizi consentono a specifiche funzioni Lambda di Servizi AWS replicare e abilitare l'utilizzo di file di registro. Regioni AWS CloudWatch CloudFront Autorizzazioni IAM necessarie per associare le funzioni Lambda @Edge alle distribuzioni CloudFront Oltre alle autorizzazioni IAM necessarie per Lambda, sono necessarie le seguenti autorizzazioni per associare le funzioni Lambda alle distribuzioni: CloudFront lambda:GetFunction : concede l’autorizzazione per ottenere informazioni di configurazione relative alla funzione Lambda e un URL pre-firmato per scaricare un file .zip contenente la funzione. lambda:EnableReplication* : concede l’autorizzazione alla policy delle risorse in modo che il servizio di replica Lambda possa ottenere il codice e la configurazione della funzione. 
lambda:DisableReplication* – grants permission on the resource policy so that the Lambda replication service can delete the function. Important You must add the asterisk ( * ) to the end of the lambda:EnableReplication * and lambda:DisableReplication * actions. For the resource, specify the ARN of the function version that you want to execute when a CloudFront event occurs, as in the following example: arn:aws:lambda:us-east-1:123456789012:function: TestFunction :2 iam:CreateServiceLinkedRole – grants permission to create the service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. After you have configured Lambda@Edge for the first time, the service-linked role is created automatically. You do not need to add this permission to other distributions that use Lambda@Edge. cloudfront:UpdateDistribution or cloudfront:CreateDistribution – grants permission to update or create a distribution. For more information, see the following topics: Identity and Access Management for Amazon CloudFront Lambda resource access permissions in the AWS Lambda Developer Guide Function execution role for the service principals You must create an IAM role that can be assumed by the service principals lambda.amazonaws.com and edgelambda.amazonaws.com when they execute your function. Tip When you create your function in the Lambda console, you can choose to create a new execution role using an AWS policy template. This step automatically adds the Lambda@Edge permissions required to run the function. See Step 5 of Tutorial: Create a simple Lambda@Edge function . For more information about creating an IAM role manually, see Creating roles and attaching policies (console) in the IAM User Guide .
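The association permissions listed above are described one by one; as a rough illustration (not shown on the page itself), they can be bundled into a single identity-based policy document. A minimal sketch using only Python's standard library — the account ID and function version reuse the page's TestFunction placeholder:

```python
import json

# Sketch of an identity-based IAM policy granting the permissions this
# page lists for associating a Lambda@Edge function with a distribution.
# The account ID and function name below are the page's example
# placeholders, not real resources.
function_version_arn = (
    "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2"
)

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # The trailing * on EnableReplication/DisableReplication is
            # required, per the Important callout above.
            "Action": [
                "lambda:GetFunction",
                "lambda:EnableReplication*",
                "lambda:DisableReplication*",
            ],
            "Resource": function_version_arn,
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "cloudfront:UpdateDistribution",
                "cloudfront:CreateDistribution",
            ],
            "Resource": "*",
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Scoping the Lambda actions to the exact function-version ARN, while leaving the distribution-level actions broader, mirrors how the page pairs each permission with its resource.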
Example: role trust policy You can add this role on the Trust relationships tab in the IAM console. Do not add this policy on the Permissions tab. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] } For more information about the permissions to grant to the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide . Notes By default, whenever a CloudFront event triggers a Lambda function, data is written to CloudWatch Logs. If you want to use these logs, the execution role needs permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole to grant permission to the execution role. For more information about CloudWatch Logs, see Edge function logs . If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that action. Service-linked roles for Lambda@Edge Lambda@Edge uses an IAM service-linked role. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service itself and include all the permissions that the service requires to call other AWS services on your behalf. Lambda@Edge uses the following IAM service-linked roles: AWSServiceRoleForLambdaReplicator – Lambda@Edge uses this role to allow Lambda@Edge to replicate functions to AWS Regions.
When you first add a Lambda@Edge trigger in CloudFront, a role named AWSServiceRoleForLambdaReplicator is automatically created to allow Lambda@Edge to replicate functions to AWS Regions. This role is required to use Lambda@Edge functions. The ARN for the AWSServiceRoleForLambdaReplicator role looks like the following example: arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator AWSServiceRoleForCloudFrontLogger – CloudFront uses this role to push log files to CloudWatch. You can use the log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association, to allow CloudFront to push Lambda@Edge error log files to CloudWatch. The ARN for the AWSServiceRoleForCloudFrontLogger role looks like the following: arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger A service-linked role makes setting up and using Lambda@Edge easier because you don't have to add the necessary permissions manually. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume its roles. The defined permissions include the trust policy and the permissions policy. The permissions policy cannot be attached to any other IAM entity. You must remove all associated CloudFront or Lambda@Edge resources before you can delete a service-linked role. This helps protect your Lambda@Edge resources, so that you don't remove a service-linked role that is still needed to access active resources. For more information about service-linked roles, see Service-linked roles for CloudFront .
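The service-linked role ARNs shown above follow a fixed layout (partition, service, empty region field, account, then the role path and name). A small Python sketch that splits the replicator role's example ARN into its components:

```python
# The example ARN from the documentation above; 123456789012 is a
# placeholder account number.
arn = ("arn:aws:iam::123456789012:role/aws-service-role/"
       "replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator")

# IAM ARNs have the form arn:partition:service:region:account:resource;
# for IAM the region field is always empty.
prefix, partition, service, region, account, resource = arn.split(":", 5)
assert (prefix, partition, service, region) == ("arn", "aws", "iam", "")

# The resource part encodes the resource type, the service-linked-role
# path, the owning service principal, and the role name.
kind, path, owning_service, role_name = resource.split("/")
assert kind == "role" and path == "aws-service-role"
print(owning_service, role_name)
```

The `aws-service-role` path segment and the embedded service principal (`replicator.lambda.amazonaws.com`) are what distinguish a service-linked role's ARN from an ordinary IAM role's.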
Service-linked role permissions for Lambda@Edge Lambda@Edge uses two service-linked roles, named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger . The following sections describe the permissions for each of these roles. Contents Service-linked role permissions for Lambda Replicator Service-linked role permissions for CloudFront Logger Service-linked role permissions for Lambda Replicator This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role. The role's permissions policy allows Lambda@Edge to complete the following actions on the specified resources: lambda:CreateFunction - arn:aws:lambda:*:*:function:* lambda:DeleteFunction - arn:aws:lambda:*:*:function:* lambda:DisableReplication - arn:aws:lambda:*:*:function:* iam:PassRole - all AWS resources cloudfront:ListDistributionsByLambdaFunction - all AWS resources Service-linked role permissions for CloudFront Logger This service-linked role allows CloudFront to push log files to CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role. The role's permissions policy allows Lambda@Edge to complete the following actions on the specified resource arn:aws:logs:*:*:log-group:/aws/cloudfront/* : logs:CreateLogGroup logs:CreateLogStream logs:PutLogEvents To allow an IAM entity (such as a user, group, or role) to delete Lambda@Edge service-linked roles, you must configure permissions.
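The logger role's resource pattern above uses IAM wildcards; shell-style globbing is a close enough approximation to check which log-group ARNs it covers. A sketch using Python's `fnmatch` — the concrete log-group ARN below is hypothetical:

```python
from fnmatch import fnmatchcase

# Resource pattern from the CloudFront logger role's permissions policy.
pattern = "arn:aws:logs:*:*:log-group:/aws/cloudfront/*"

# A hypothetical log group where Lambda@Edge validation errors might land.
log_group_arn = ("arn:aws:logs:us-east-1:123456789012:"
                 "log-group:/aws/cloudfront/LambdaEdge/EXAMPLE")

# The pattern covers any region and account, but only log groups under
# the /aws/cloudfront/ prefix.
assert fnmatchcase(log_group_arn, pattern)
assert not fnmatchcase(
    "arn:aws:logs:us-east-1:123456789012:log-group:/aws/other", pattern
)
print("log-group ARN matches the role's resource pattern")
```

Note that real IAM policy-wildcard evaluation is done by AWS, not by glob matching; this is only an illustration of the pattern's scope.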
For more information, see Service-linked role permissions in the IAM User Guide . Creating service-linked roles for Lambda@Edge You don't typically create the service-linked roles for Lambda@Edge manually. The service creates the roles automatically in the following cases: When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role (if it doesn't already exist). This role allows Lambda to replicate Lambda@Edge functions to AWS Regions. If you delete it, the service-linked role is created again when you add a new trigger for Lambda@Edge in a distribution. When you update or create a CloudFront distribution with a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role (if the role doesn't already exist). This role allows CloudFront to push log files to CloudWatch. If you delete the service-linked role, the role is created again when you update or create a CloudFront distribution with a Lambda@Edge association. To create these service-linked roles manually, you can run the following AWS Command Line Interface (AWS CLI) commands: To create the AWSServiceRoleForLambdaReplicator role Run the following command. aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com To create the AWSServiceRoleForCloudFrontLogger role Run the following command. aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com Editing Lambda@Edge service-linked roles Lambda@Edge does not allow you to edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After a service-linked role is created, you cannot change the name of the role because various entities might reference the role. However, you can use IAM to edit the description of the role.
For more information, see Editing a service-linked role in the IAM User Guide . Supported AWS Regions for Lambda@Edge service-linked roles CloudFront supports the use of service-linked roles for Lambda@Edge in the following AWS Regions: US East (N. Virginia) – us-east-1 US East (Ohio) – us-east-2 US West (N. California) – us-west-1 US West (Oregon) – us-west-2 Asia Pacific (Mumbai) – ap-south-1 Asia Pacific (Seoul) – ap-northeast-2 Asia Pacific (Singapore) – ap-southeast-1 Asia Pacific (Sydney) – ap-southeast-2 Asia Pacific (Tokyo) – ap-northeast-1 Europe (Frankfurt) – eu-central-1 Europe (Ireland) – eu-west-1 Europe (London) – eu-west-2 South America (São Paulo) – sa-east-1 | 2026-01-13T09:30:34 |
https://releases.llvm.org/18.1.8/docs/ReleaseNotes.html | LLVM 18.1.8 Release Notes — LLVM 18.1.8 documentation LLVM 18.1.8 Release Notes ¶ Introduction Non-comprehensive list of changes in this release Update on required toolchains to build LLVM Changes to the LLVM IR Changes to LLVM infrastructure Changes to building LLVM Changes to TableGen Changes to Interprocedural Optimizations Changes to the AArch64 Backend Changes to the AMDGPU Backend Changes to the ARM Backend Changes to the AVR Backend Changes to the DirectX Backend Changes to the Hexagon Backend Changes to the LoongArch Backend Changes to the MIPS Backend Changes to the PowerPC Backend Changes to the RISC-V Backend Changes to the SystemZ Backend Changes to the WebAssembly Backend Changes to the Windows Target Changes to the X86 Backend Changes to the OCaml bindings Changes to the Python bindings Changes to the C API Changes to the CodeGen infrastructure Changes to the Metadata Info Changes to the Debug Info Changes to the LLVM tools Changes to LLDB Changes to Sanitizers Changes to the Profile Runtime Other Changes External Open Source Projects Using LLVM 15 Additional Information Introduction ¶ This document contains the release notes for the LLVM Compiler Infrastructure, release 18.1.8. Here we describe the status of LLVM, including major improvements from the previous release, improvements in various subprojects of LLVM, and some of the current users of the code. All LLVM releases may be downloaded from the LLVM releases web site .
For more information about LLVM, including information about the latest release, please check out the main LLVM web site . If you have questions or comments, the Discourse forums is a good place to ask them. Note that if you are reading this file from a Git checkout or the main LLVM web page, this document applies to the next release, not the current one. To see the release notes for a specific release, please see the releases page . Non-comprehensive list of changes in this release ¶ … Update on required toolchains to build LLVM ¶ Changes to the LLVM IR ¶ The llvm.stacksave and llvm.stackrestore intrinsics now use an overloaded pointer type to support non-0 address spaces. The constant expression variants of the following instructions have been removed: and or lshr ashr zext sext fptrunc fpext fptoui fptosi uitofp sitofp Added llvm.exp10 intrinsic. Added a code_model attribute for the global variable . Changes to LLVM infrastructure ¶ Minimum Clang version to build LLVM in C++20 configuration has been updated to clang-17.0.6. Changes to building LLVM ¶ Changes to TableGen ¶ Added constructs for debugging TableGen files: dump keyword to dump messages to standard error, see https://github.com/llvm/llvm-project/pull/68793 . !repr bang operator to inspect the content of values, see https://github.com/llvm/llvm-project/pull/68716 . Changes to Interprocedural Optimizations ¶ Changes to the AArch64 Backend ¶ Added support for Cortex-A520, Cortex-A720 and Cortex-X4 CPUs. Neoverse-N2 was incorrectly marked as an Armv8.5a core. This has been changed to an Armv9.0a core. However, crypto options are not enabled by default for Armv9 cores, so -mcpu=neoverse-n2+crypto is now required to enable crypto for this core. As far as the compiler is concerned, Armv9.0a has the same features enabled as Armv8.5a, with the exception of crypto. Assembler/disassembler support has been added for 2023 architecture extensions. Support has been added for Stack Clash Protection. 
During function frame creation and dynamic stack allocations, the compiler will issue memory accesses at regular intervals so that a guard area at the top of the stack can’t be skipped over. Changes to the AMDGPU Backend ¶ llvm.sqrt.f32 is now lowered correctly. Use llvm.amdgcn.sqrt.f32 for raw instruction access. Implemented llvm.stacksave and llvm.stackrestore intrinsics. Implemented llvm.get.rounding The default AMDHSA code object version is now 5. Changes to the ARM Backend ¶ Added support for Cortex-M52 CPUs. Added execute-only support for Armv6-M. Changes to the AVR Backend ¶ Changes to the DirectX Backend ¶ Changes to the Hexagon Backend ¶ Changes to the LoongArch Backend ¶ Added intrinsics support for all LSX (128-bit SIMD) and LASX (256-bit SIMD) instructions. Added definition and intrinsics support for new instructions that were introduced in LoongArch Reference Manual V1.10. Emitted adjacent pcaddu18i+jirl instruction sequence with one relocation R_LARCH_CALL36 instead of pcalau12i+jirl with two relocations R_LARCH_PCALA_{HI20,LO12} for function call in medium code model. The code model of global variables can now be overridden by means of the newly added LLVM IR attribute, code_model . Added support for the llvm.is.fpclass intrinsic. mulodi4 and muloti4 libcalls were disabled due to absence in libgcc. Added initial support for auto vectorization. Added initial support for linker relaxation. Assorted codegen improvements. Changes to the MIPS Backend ¶ Changes to the PowerPC Backend ¶ LLJIT’s JIT linker now defaults to JITLink on 64-bit ELFv2 targets. Initial-exec TLS model is supported on AIX. Implemented new resource based scheduling model of POWER7 and POWER8. frexp libcall now references correct symbol name for fp128 . Optimized materialization of 64-bit immediates, code generation of vec_promote and atomics. Global constant strings are pooled in the TOC under one entry to reduce the number of entries in the TOC.
Added a number of missing Power10 extended mnemonics. Added the SCV instruction. Fixed register class for the paddi instruction. Optimize VPERM and fix code order for swapping vector operands on LE. Added various bug fixes and code gen improvements. AIX Support/improvements: Support for a non-TOC-based access sequence for the local-exec TLS model (called small local-exec). XCOFF toc-data peephole optimization and bug fixes. Move less often used __ehinfo TOC entries to the end of the TOC section. Fixed problems when the AIX libunwind unwinds starting from a signal handler and the function that raised the signal happens to be a leaf function that shares the stack frame with its caller or a leaf function that does not store the stack frame backchain. Changes to the RISC-V Backend ¶ The Zfa extension version was upgraded to 1.0 and is no longer experimental. Zihintntl extension version was upgraded to 1.0 and is no longer experimental. Intrinsics were added for Zk*, Zbb, and Zbc. See https://github.com/riscv-non-isa/riscv-c-api-doc/blob/master/riscv-c-api.md#scalar-bit-manipulation-extension-intrinsics Default ABI with F but without D was changed to ilp32f for RV32 and to lp64f for RV64. The Zvbb, Zvbc, Zvkb, Zvkg, Zvkn, Zvknc, Zvkned, Zvkng, Zvknha, Zvknhb, Zvks, Zvksc, Zvksed, Zvksg, Zvksh, and Zvkt extension version was upgraded to 1.0 and is no longer experimental. However, the C intrinsics for these extensions are still experimental. To use the C intrinsics for these extensions, -menable-experimental-extensions needs to be passed to Clang. XSfcie extension and SiFive CSRs and instructions that were associated with it have been removed. None of these CSRs and instructions were part of “SiFive Custom Instruction Extension” as SiFive defines it. The LLVM project needs to work with SiFive to define and document real extension names for individual CSRs and instructions. -mcpu=sifive-p450 was added. CodeGen of RV32E/RV64E was supported experimentally. 
CodeGen of ilp32e/lp64e was supported experimentally. Support was added for the Ziccif, Ziccrse, Ziccamoa, Zicclsm, Za64rs, Za128rs and Zic64b extensions which were introduced as a part of the RISC-V Profiles specification. The Smepmp 1.0 extension is now supported. -mcpu=sifive-p670 was added. Support for the Zicond extension is no longer experimental. Changes to the SystemZ Backend ¶ Properly support 16 byte atomic int/fp types and ops. Support i128 as legal type in VRs. Add an i128 cost model. Support building individual functions with backchain using the __attribute__((target(“backchain”))) syntax. Add exception handling for XPLINK. Add support for llvm-objcopy. Changes to the WebAssembly Backend ¶ Changes to the Windows Target ¶ The LLVM filesystem class UniqueID and function equivalent() no longer determine that distinct different path names for the same hard linked file actually are equal. This is an intentional tradeoff in a bug fix, where the bug used to cause distinct files to be considered equivalent on some file systems. This change fixed the issues https://github.com/llvm/llvm-project/issues/61401 and https://github.com/llvm/llvm-project/issues/22079 . Changes to the X86 Backend ¶ The i128 type now matches GCC and clang’s __int128 type. This mainly benefits external projects such as Rust which aim to be binary compatible with C, but also fixes code generation where LLVM already assumed that the type matched and called into libgcc helper functions. Support ISA of USER_MSR . Support ISA of AVX10.1-256 and AVX10.1-512 . -mcpu=pantherlake and -mcpu=clearwaterforest are now supported. -mapxf is supported. Marking global variables with code_model = "small"/"large" in the IR now overrides the global code model to allow 32-bit relocations or require 64-bit relocations to the global variable. The medium code model’s code generation was audited to be more similar to the small code model where possible. 
Changes to the OCaml bindings ¶ Changes to the Python bindings ¶ The python bindings have been removed. Changes to the C API ¶ Added LLVMGetTailCallKind and LLVMSetTailCallKind to allow getting and setting tail , musttail , and notail attributes on call instructions. The following functions for creating constant expressions have been removed, because the underlying constant expressions are no longer supported. Instead, an instruction should be created using the LLVMBuildXYZ APIs, which will constant fold the operands if possible and create an instruction otherwise: LLVMConstAnd LLVMConstOr LLVMConstLShr LLVMConstAShr LLVMConstZExt LLVMConstSExt LLVMConstZExtOrBitCast LLVMConstSExtOrBitCast LLVMConstIntCast LLVMConstFPTrunc LLVMConstFPExt LLVMConstFPToUI LLVMConstFPToSI LLVMConstUIToFP LLVMConstSIToFP LLVMConstFPCast Added LLVMCreateTargetMachineWithOptions , along with helper functions for an opaque option structure, as an alternative to LLVMCreateTargetMachine . The option structure exposes an additional setting (i.e., the target ABI) and provides default values for unspecified settings. Added LLVMGetNNeg and LLVMSetNNeg for getting/setting the new nneg flag on zext instructions, and LLVMGetIsDisjoint and LLVMSetIsDisjoint for getting/setting the new disjoint flag on or instructions. 
Added the following functions for manipulating operand bundles, as well as building call and invoke instructions that use operand bundles: LLVMBuildCallWithOperandBundles LLVMBuildInvokeWithOperandBundles LLVMCreateOperandBundle LLVMDisposeOperandBundle LLVMGetNumOperandBundles LLVMGetOperandBundleAtIndex LLVMGetNumOperandBundleArgs LLVMGetOperandBundleArgAtIndex LLVMGetOperandBundleTag Added LLVMGetFastMathFlags and LLVMSetFastMathFlags for getting/setting the fast-math flags of an instruction, as well as LLVMCanValueUseFastMathFlags for checking if an instruction can use such flags Changes to the CodeGen infrastructure ¶ A new debug type isel-dump is added to show only the SelectionDAG dumps after each ISel phase (i.e. -debug-only=isel-dump ). This new debug type can be filtered by function names using -filter-print-funcs=<function names> , the same flag used to filter IR dumps after each Pass. Note that the existing -debug-only=isel will take precedence over the new behavior and print SelectionDAG dumps of every single function regardless of -filter-print-funcs ’s values. PrologEpilogInserter no longer supports register scavenging during forwards frame index elimination. Targets should use backwards frame index elimination instead. RegScavenger no longer supports forwards register scavenging. Clients should use backwards register scavenging instead, which is preferred because it does not depend on accurate kill flags. Changes to the Metadata Info ¶ Added a new loop metadata !{!”llvm.loop.align”, i32 64} Changes to the Debug Info ¶ Changes to the LLVM tools ¶ llvm-symbolizer now treats invalid input as an address for which source information is not found. Fixed big-endian support in llvm-symbolizer ’s DWARF location parser. llvm-readelf now supports --extra-sym-info ( -X ) to display extra information (section name) when showing symbols. llvm-readobj / llvm-readelf now supports --decompress / -z with string and hex dump for ELF object files. 
llvm-symbolizer and llvm-addr2line now support addresses specified as symbol names. llvm-objcopy now supports --gap-fill and --pad-to options, for ELF input and binary output files only. llvm-objcopy now supports -O elf64-s390 for SystemZ. Supported parsing XCOFF auxiliary symbols in obj2yaml . llvm-ranlib now supports -X on AIX to specify the type of object file ranlib should examine. llvm-cxxfilt now supports --no-params / -p to skip function parameters. llvm-nm now supports --export-symbol to ignore the import symbol file. llvm-nm now supports the --line-numbers ( -l ) option to use debugging information to print symbols’ filenames and line numbers. llvm-rc and llvm-windres now accept file path references in .rc files concatenated from multiple string literals. The llvm-windres option --preprocessor now resolves its argument in the PATH environment variable as expected, and options passed with --preprocessor-arg are placed before the input file as they should be. The llvm-windres option --preprocessor has been updated with the breaking behaviour change from GNU windres from binutils 2.36, where the whole argument is considered as one path, not considered as a sequence of tool name and parameters. Changes to LLDB ¶ SBWatchpoint::GetHardwareIndex is deprecated and now returns -1 to indicate the index is unavailable. Methods in SBHostOS related to threads have had their implementations removed. These methods will return a value indicating failure. SBType::FindDirectNestedType function is added. It’s useful for formatters to quickly find directly nested type when it’s known where to search for it, avoiding more expensive global search via SBTarget::FindFirstType . lldb-vscode was renamed to lldb-dap and its installation instructions have been updated to reflect this. The underlying functionality remains unchanged. The mte_ctrl register can now be read from AArch64 Linux core files.
LLDB on AArch64 Linux now supports debugging the Scalable Matrix Extension (SME) and Scalable Matrix Extension 2 (SME2) for both live processes and core files. For details refer to the AArch64 Linux documentation . LLDB now supports symbol and binary acquisition automatically using the debuginfod protocol. The standard mechanism of specifying debuginfod servers in the DEBUGINFOD_URLS environment variable is used by default. In addition, users can specify servers to request symbols from using the LLDB setting plugin.symbol-locator.debuginfod.server_urls , overriding or adding to the environment variable. When running on AArch64 Linux, lldb-server now provides register field information for the following registers: cpsr , fpcr , fpsr , svcr and mte_ctrl . ( lldb ) register read cpsr cpsr = 0x80001000 = ( N = 1 , Z = 0 , C = 0 , V = 0 , SS = 0 , IL = 0 , <...> This is only available when lldb is built with XML support. Where possible the CPU’s capabilities are used to decide which fields are present, however this is not always possible or entirely accurate. If in doubt, refer to the numerical value. On Windows, LLDB can now read the thread names. Changes to Sanitizers ¶ HWASan now defaults to detecting use-after-scope bugs. SpecialCaseList used by sanitizer ignore lists (e.g. *_ignorelist.txt in the Clang resource directory) now uses glob patterns instead of a variant of POSIX Extended Regular Expression (where * is translated to .* ) by default. Search for | to find patterns that may have different meanings now, and replace a|b with {a,b} . Changes to the Profile Runtime ¶ Public header profile/instr_prof_interface.h is added to declare four API functions to fine tune profile collection. Other Changes ¶ The Flags field of llvm::opt::Option has been split into Flags and Visibility to simplify option sharing between various drivers (such as clang , clang-cl , or flang ) that rely on Clang’s Options.td.
Overloads of llvm::opt::OptTable that use FlagsToInclude have been deprecated. There is a script and instructions on how to resolve conflicts - see https://reviews.llvm.org/D157150 and https://reviews.llvm.org/D157151 for details. On Linux, FreeBSD, and NetBSD, setting the environment variable LLVM_ENABLE_SYMBOLIZER_MARKUP causes tools to print stacktraces using Symbolizer Markup . This works even if the tools have no embedded symbol information (i.e. are fully stripped); llvm-symbolizer can symbolize the markup afterwards using debuginfod . External Open Source Projects Using LLVM 15 ¶ A project… Additional Information ¶ A wide variety of additional information is available on the LLVM web page , in particular in the documentation section. The web page also contains versions of the API documentation which is up-to-date with the Git version of the source code. You can access versions of these documents specific to this release by going into the llvm/docs/ directory in the LLVM tree. If you have any questions or comments about LLVM, please feel free to contact us via the Discourse forums . © Copyright 2003-2024, LLVM Project. Last updated on 2024-06-19. Created using Sphinx 7.1.2. | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/545 | LLVM Weekly - #545, June 10th 2024 Welcome to the five hundred and forty-fifth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events LLVM 18.1.7 was released . Google has now open sourced GWPSan . According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Alexey Bader, Alina Sbirlea, Kristof Beyls, Johannes Doerfert. Online sync-ups on the following topics: Flang, LLVM AA, pointer authentication, SPIR-V, libc++, new contributors, LLVM/offload, classic flang, loop optimisations, BOLT, OpenMP for Flang, MLIR. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Mahesh Ravishankar posted an MLIR RFC on updating the ‘general design’ section of operation canonicalisations in MLIR . Han-Kuan proposed making the SLP vectorizer support revectorizing vector instructions . Alex Bradbury suggested adding an llvm.memset_pattern.inline intrinsic . “Lifengxiang1025” proposed an RFC on printing PGO accuracy metrics . “ArcaneNibble” started an RFC discussion on supporting the WCH/QingKe RISC-V extensions . Min-Yih Hsu started a discussion on improving pre-commit buildbot output . Minutes from the March and April LLVM Foundation board meeting were published. Stephen Neuendorffer shared news of the open sourcing of an LLVM backend for the AMD/Xilinx AI Engine processors .
LLVM commits A new target-specific pass was introduced to merge convergence region exit targets for SPIR-V. a5641f1 . The algorithmic complexity of MachineOutliner::findCandidates was reduced, improving runtime for large inputs. 16c925a . A new pass was implemented to lower variadics in IR, enabled for AMDGPU and tested primarily from WebAssembly. 8516f54 . SelectionDAGISel was ported to the new pass manager. 7652a59 . The GlobalMerge pass gained a MinSize option, used to control the minimum size in bytes of each global that should be considered for merging. 0f66915 . The preserve_none calling convention is now supported for AArch64. ae1596a . Support was removed for icmp and fcmp constant expressions. eb3f2be . AArch64LoopIdiomTransform was generalised to LoopIdiomVectorize. 37e309f . The minimum python version was updated to 3.8. 33f4a77 . Clang commits Support was added for AMDGCN flavoured SPIRV. 88e2bb4 . The unix.BlockInCriticalSection and security.PutenvStackArray checkers are no longer ‘alpha’ quality. 6ef785c , bc3baa9 . A new optin.taint.TaintedAlloc checker was added to Clang’s static analyzer for catching unbounded memory allocation calls. 289725f . Other project commits LLDB now supports CoreDump debugging for 64-bit RISC-V. d3a9043 . Flang started using fir.declare / fir.dummy_scope to attach TBAA tags. 6cd86d0 . A collection of f16 libc math functions were added to LLVM’s libc. 25b037b , 2635d04 , 6b5ae14 , and more. libcxx hardening was fully documented. 86070a8 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://docs.aws.amazon.com/ko_kr/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html | Setting IAM permissions and roles for Lambda@Edge - Amazon CloudFront Developer Guide

Setting IAM permissions and roles for Lambda@Edge

To configure Lambda@Edge, you must have the following IAM permissions and roles for AWS Lambda:

IAM permissions – These permissions let you create your Lambda function and associate it with your CloudFront distribution.

A Lambda function execution role (IAM role) – The Lambda service principals assume this role to execute your function.

Lambda@Edge service-linked roles – Service-linked roles allow specific AWS services to replicate Lambda functions to AWS Regions, and allow CloudWatch to use CloudFront log files.

IAM permissions required to associate Lambda@Edge functions with a CloudFront distribution

In addition to the IAM permissions that you need for Lambda, you need the following permissions to associate Lambda functions with CloudFront distributions:

lambda:GetFunction – Grants permission to get the configuration information for your Lambda function and a presigned URL to download the .zip file that contains the function.

lambda:EnableReplication* – Grants permission on the resource policy so that the Lambda replication service can get the function code and configuration.

lambda:DisableReplication* – Grants permission on the resource policy so that the Lambda replication service can delete the function.

Important: You must add the asterisk (*) at the end of the lambda:EnableReplication* and lambda:DisableReplication* actions. For the resource, specify the ARN of the function version to execute when the CloudFront event occurs, as in the following example: arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2

iam:CreateServiceLinkedRole – Grants permission to create the service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. The service-linked role is created automatically after you configure Lambda@Edge for the first time; you do not need to add this permission to other distributions that use Lambda@Edge.

cloudfront:UpdateDistribution or cloudfront:CreateDistribution – Grants permission to update or create a distribution.

For more information, see the following topics: Identity and Access Management for Amazon CloudFront; Lambda resource access permissions in the AWS Lambda Developer Guide.

Function execution role for service principals

You must create an IAM role that the lambda.amazonaws.com and edgelambda.amazonaws.com service principals can assume when they execute your function.

Tip: When you create a function in the Lambda console, you can create a new execution role from an AWS policy template. This step automatically adds the Lambda@Edge permissions required to run your function. See Step 5: Create a simple Lambda@Edge function in the tutorial.
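The distribution-association permissions listed earlier can be collected into an identity-based policy. The following is only an illustrative sketch: the account ID, Region, and function name are placeholders, and granting the iam/cloudfront actions on all resources is for brevity only — scope them down in practice.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:EnableReplication*",
        "lambda:DisableReplication*"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "cloudfront:UpdateDistribution",
        "cloudfront:CreateDistribution"
      ],
      "Resource": "*"
    }
  ]
}
```

Note that the Lambda statement targets a specific function version ARN, matching the requirement above that the version to be executed is named explicitly.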
For information about manually creating an IAM role, see Creating a role and attaching a policy (console) in the IAM User Guide.

Example: role trust policy

You can add this role under the Trust relationships tab in the IAM console. Do not add this policy under the Permissions tab.

JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }

For information about the permissions that you need to grant to the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide.

Notes

By default, whenever a CloudFront event triggers a Lambda function, data is written to CloudWatch Logs. To use these logs, the execution role must have permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole to grant this permission to the execution role. For more information about CloudWatch Logs, see Edge function logs.

If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that operation.

Service-linked roles for Lambda@Edge

Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. Lambda@Edge uses the following IAM service-linked roles:

AWSServiceRoleForLambdaReplicator – Lambda@Edge uses this role to allow Lambda@Edge to replicate functions to AWS Regions. When you first add a Lambda@Edge trigger in CloudFront, a role named AWSServiceRoleForLambdaReplicator is created automatically to allow Lambda@Edge to replicate functions to AWS Regions. This role is required to use Lambda@Edge functions. The ARN for the AWSServiceRoleForLambdaReplicator role looks like the following example: arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator

AWSServiceRoleForCloudFrontLogger – CloudFront uses this role to push log files to CloudWatch. You can use the log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association, to allow CloudFront to push Lambda@Edge error log files to CloudWatch. The ARN for the AWSServiceRoleForCloudFrontLogger role looks like the following: arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger

Service-linked roles make setting up and using Lambda@Edge easier because you don't have to manually add the required permissions. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume those roles.
The defined permissions include the trust policy and the permissions policy. The permissions policy cannot be attached to any other IAM entity. You must remove the associated CloudFront or Lambda@Edge resources before you can delete a service-linked role. This protects your Lambda@Edge resources by preventing removal of a service-linked role that is still required to access active resources.

For more information about service-linked roles, see Service-linked roles for CloudFront.

Service-linked role permissions for Lambda@Edge

Lambda@Edge uses two service-linked roles, named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each of these roles.

Contents: Service-linked role permissions for Lambda Replicator; Service-linked role permissions for CloudFront Logger

Service-linked role permissions for Lambda Replicator

This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the specified resources:

lambda:CreateFunction on arn:aws:lambda:*:*:function:*
lambda:DeleteFunction on arn:aws:lambda:*:*:function:*
lambda:DisableReplication on arn:aws:lambda:*:*:function:*
iam:PassRole on all AWS resources
cloudfront:ListDistributionsByLambdaFunction on all AWS resources

Service-linked role permissions for CloudFront Logger

This service-linked role allows CloudFront to push log files to CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the specified arn:aws:logs:*:*:log-group:/aws/cloudfront/* resource:

logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents

You must configure permissions to allow an IAM entity (such as a user, group, or role) to delete the Lambda@Edge service-linked roles. For more information, see Service-linked role permissions in the IAM User Guide.

Creating service-linked roles for Lambda@Edge

You don't normally create the service-linked roles for Lambda@Edge manually. The service creates the roles automatically for you in the following scenarios:

When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role if it doesn't already exist. This role allows Lambda to replicate Lambda@Edge functions to AWS Regions. If you delete the service-linked role, the role is created again when you add a new Lambda@Edge trigger in a distribution.

When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role if it doesn't already exist. This role allows CloudFront to push your log files to CloudWatch.
Even if you delete the service-linked role, it is created again when you update or create a CloudFront distribution that has a Lambda@Edge association.

If you need to create these service-linked roles manually, run the following AWS Command Line Interface (AWS CLI) commands:

To create the AWSServiceRoleForLambdaReplicator role, run: aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com

To create the AWSServiceRoleForCloudFrontLogger role, run: aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com

Editing the Lambda@Edge service-linked roles

Lambda@Edge does not let you edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After a service creates a service-linked role, you cannot change the name of the role, because various entities might reference it. However, you can use IAM to edit the description of the role. For more information, see Editing a service-linked role in the IAM User Guide.

Supported AWS Regions for the Lambda@Edge service-linked roles

CloudFront supports using service-linked roles for Lambda@Edge in the following AWS Regions: US East (N. Virginia) – us-east-1; US East (Ohio) – us-east-2; US West (N. California) – us-west-1; US West (Oregon) – us-west-2; Asia Pacific (Mumbai) – ap-south-1; Asia Pacific (Seoul) – ap-northeast-2; Asia Pacific (Singapore) – ap-southeast-1; Asia Pacific (Sydney) – ap-southeast-2; Asia Pacific (Tokyo) – ap-northeast-1; Europe (Frankfurt) – eu-central-1; Europe (Ireland) – eu-west-1; Europe (London) – eu-west-2; South America (São Paulo) – sa-east-1 | 2026-01-13T09:30:34
https://www.php.net/manual/tr/function.session-name.php | PHP: session_name - Manual
session_name

(PHP 4, PHP 5, PHP 7, PHP 8)

session_name — Get and/or set the current session name

Description

session_name(?string $name = null): string|false

session_name() returns the name of the current session. If name is given and is not null, session_name() will update the session name and return the old session name. When session_name() updates the session name, it also updates the HTTP cookie (and outputs the content when session.trans_sid is enabled). Once the HTTP cookie has been sent, session_name() raises an error. session_name() must be called before session_start() for the session to work properly.

The session name is reset to the name stored in the session.name directive at request startup. Thus, to change the session name from the default, you need to call session_name() for every request (and before session_start() or session_register() is called).

Parameters

name – The session name (such as PHPSESSID) is the session name used in cookies and URLs. It should contain only alphanumeric characters; it should be short and descriptive (for users with cookie warnings enabled). If name is given and is not null, the name of the current session is changed to its value.

Warning: The session name cannot consist only of digits; it must contain at least one letter. Otherwise a new session id is generated every time.

Return Values

Returns the name of the current session. If name is given and is not null, the function updates the session name and returns the old session name, or false on failure.
Changelog

8.0.0 – name is nullable now.
7.2.0 – session_name() now checks the session status; previously it only checked the cookie status. Because of this, the old session_name() allowed calls after session_start(), which could crash PHP or cause misbehavior.

Examples

Example 1 – session_name() example

<?php
/* set the session name to SiteID */
$previous_name = session_name("SiteID");
echo "The previous session name was $previous_name.<br />";
?>

See Also: the session.name configuration directive

User Contributed Notes

Hongliang Qiang (21 years ago): This may sound no-brainer: the session_name() function will have no essential effect if you set session.auto_start to "true" in php.ini. And the obvious explanation is the session already started thus cannot be altered before the session_name() function--wherever it is in the script--is executed, same reason session_name needs to be called before session_start() as documented. I know it is really not a big deal. But I had a quite hard time before figuring this out, and hope it might be helpful to someone like me.

php at wiz dot cx (17 years ago): if you try to name a php session "example.com" it gets converted to "example_com" and everything breaks. don't use a period in your session name.

relsqui at chiliahedron dot com (16 years ago): Remember, kids--you MUST use session_name() first if you want to use session_set_cookie_params() to, say, change the session timeout. Otherwise it won't work, won't give any error, and nothing in the documentation (that I've seen, anyway) will explain why. Thanks to brandan of bildungsroman.com who left a note under session_set_cookie_params() explaining this or I'd probably still be throwing my hands up about it.

Joseph Dalrymple (14 years ago): For those wondering, this function is expensive!
On a script that was executing in a consistent 0.0025 seconds, just the use of session_name("foo") shot my execution time up to ~0.09s. By simply sacrificing session_name("foo"), I sped my script up by roughly 0.09 seconds.

Victor H (10 years ago): As Joseph Dalrymple said, adding session_name does slow down the execution time a little. But what I've observed is that it decreased the fluctuation between requests. Requests on my script fluctuated between 0.045 and 0.022 seconds. With session_name("myapp"), it goes to 0.050 and 0.045. Not a big deal, but that's a point to note. For those with problems setting the name: when session.auto_start is set to 1, you need to set session.name in php.ini!

mmulej at gmail dot com (4 years ago): Hope this is not out of php.net noting scope. session_name('name') must be set before session_start() because the former changes ini settings and the latter reads them. For the same reason session_set_cookie_params($options) must be set before session_start() as well. I find it best to do the following:

function is_session_started() {
    if (php_sapi_name() === 'cli') return false;
    if (version_compare(phpversion(), '5.4.0', '>=')) return session_status() === PHP_SESSION_ACTIVE;
    return session_id() !== '';
}

if (!is_session_started()) {
    session_name($session_name);
    session_set_cookie_params($cookie_options);
    session_start();
}

tony at marston-home dot demon dot co dot uk (7 years ago): The description that session_name() gets and/or sets the name of the current session is technically wrong. It does nothing but deal with the value originally supplied by the session.name value within the php.ini file.
Thus:

$name = session_name();
is functionally equivalent to
$name = ini_get('session.name');

and

session_name('newname');
is functionally equivalent to
ini_set('session.name', 'newname');

This also means that:

$old_name = session_name('newname');
is functionally equivalent to
$old_name = ini_set('session.name', 'newname');

The current value of session.name is not attached to a session until session_start() is called. Once session_start() has used session.name to look up the session_id() in the cookie data, the name becomes irrelevant, as all further operations on the session data are keyed by the session_id(). Note that changing session.name while a session is currently active will not update the name in any session cookie. The new name does not take effect until the next call to session_start(), and this requires that the current session, which was created with the previous value for session.name, be closed.

tony at marston-home dot demon dot co dot uk (7 years ago): The description has recently been modified to contain the statement "When new session name is supplied, session_name() modifies HTTP cookie". This is not correct, as session_name() has never modified any cookie data. A change in session.name does not become effective until session_start() is called, and it is session_start() that creates the cookie if it does not already exist. See the following bug report for details: https://bugs.php.net/bug.php?id=76413

descartavel1+php at gmail dot com (2 years ago): Always try to set the prefix for your session name attribute to either `__Host-` or `__Secure-` to benefit from browsers' improved security.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes — Also, if you have session.auto_start enabled, you must set this name in session.name in your config (php.ini, htaccess, etc). | 2026-01-13T09:30:34
http://anh.cs.luc.edu/handsonPythonTutorial/ifstatements.html#congress-exercise | 3.1. If Statements — Hands-on Python Tutorial for Python 3

3.1. If Statements

3.1.1. Simple Conditions

The statements introduced in this chapter will involve tests or conditions. More syntax for conditions will be introduced later, but for now consider simple arithmetic comparisons that directly translate from math into Python. Try each line separately in the Shell:

2 < 5
3 > 7
x = 11
x > 10
2 * x < x
type(True)

You see that conditions are either True or False. These are the only possible Boolean values (named after 19th century mathematician George Boole). In Python the name Boolean is shortened to the type bool. It is the type of the results of true-false conditions or tests.

Note: The Boolean values True and False have no quotes around them! Just as '123' is a string and 123 without the quotes is not, 'True' is a string, not of type bool.

3.1.2. Simple if Statements

Run this example program, suitcase.py. Try it at least twice, with inputs 30 and then 55. As you can see, you get an extra result, depending on the input. The main code is:

weight = float(input("How many pounds does your suitcase weigh? "))
if weight > 50:
    print("There is a $25 charge for luggage that heavy.")
print("Thank you for your business.")

The middle two lines are an if statement. It reads pretty much like English. If it is true that the weight is greater than 50, then print the statement about an extra charge. If it is not true that the weight is greater than 50, then don't do the indented part: skip printing the extra luggage charge. In any event, when you have finished with the if statement (whether it actually does anything or not), go on to the next statement that is not indented under the if. In this case that is the statement printing "Thank you".
The general Python syntax for a simple if statement is if condition : indentedStatementBlock If the condition is true, then do the indented statements. If the condition is not true, then skip the indented statements. Another fragment as an example: if balance < 0 : transfer = - balance # transfer enough from the backup account: backupAccount = backupAccount - transfer balance = balance + transfer As with other kinds of statements with a heading and an indented block, the block can have more than one statement. The assumption in the example above is that if an account goes negative, it is brought back to 0 by transferring money from a backup account in several steps. In the examples above the choice is between doing something (if the condition is True ) or nothing (if the condition is False ). Often there is a choice of two possibilities, only one of which will be done, depending on the truth of a condition. 3.1.3. if - else Statements ¶ Run the example program, clothes.py . Try it at least twice, with inputs 50 and then 80. As you can see, you get different results, depending on the input. The main code of clothes.py is: temperature = float ( input ( 'What is the temperature? ' )) if temperature > 70 : print ( 'Wear shorts.' ) else : print ( 'Wear long pants.' ) print ( 'Get some exercise outside.' ) The middle four lines are an if-else statement. Again it is close to English, though you might say “otherwise” instead of “else” (but else is shorter!). There are two indented blocks: One, like in the simple if statement, comes right after the if heading and is executed when the condition in the if heading is true. In the if - else form this is followed by an else: line, followed by another indented block that is only executed when the original condition is false . In an if - else statement exactly one of two possible indented blocks is executed. A line is also shown dedented next, removing indentation, about getting exercise.
Since it is dedented, it is not a part of the if-else statement: Since its amount of indentation matches the if heading, it is always executed in the normal forward flow of statements, after the if - else statement (whichever block is selected). The general Python if - else syntax is if condition : indentedStatementBlockForTrueCondition else: indentedStatementBlockForFalseCondition These statement blocks can have any number of statements, and can include about any kind of statement. See Graduate Exercise 3.1.4. More Conditional Expressions ¶ All the usual arithmetic comparisons may be made, but many do not use standard mathematical symbolism, mostly for lack of proper keys on a standard keyboard. Meaning Math Symbol Python Symbols Less than < < Greater than > > Less than or equal ≤ <= Greater than or equal ≥ >= Equals = == Not equal ≠ != There should not be space between the two-symbol Python substitutes. Notice that the obvious choice for equals , a single equal sign, is not used to check for equality. An annoying second equal sign is required. This is because the single equal sign is already used for assignment in Python, so it is not available for tests. Warning It is a common error to use only one equal sign when you mean to test for equality, and not make an assignment! Tests for equality do not make an assignment, and they do not require a variable on the left. Any expressions can be tested for equality or inequality ( != ). They do not need to be numbers! Predict the results and try each line in the Shell : x = 5 x x == 5 x == 6 x x != 6 x = 6 6 == x 6 != x 'hi' == 'h' + 'i' 'HI' != 'hi' [ 1 , 2 ] != [ 2 , 1 ] An equality check does not make an assignment. Strings are case sensitive. Order matters in a list. Try in the Shell : 'a' > 5 When the comparison does not make sense, an Exception is caused. 
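The points above (conditions produce bool values, any expressions can be compared, and a senseless comparison raises an exception) can be confirmed in a short script (a sketch for Python 3):

```python
x = 5
print(type(x == 5))          # comparisons evaluate to bool
print('hi' == 'h' + 'i')     # True: equality works on any expressions
print([1, 2] != [2, 1])      # True: order matters in a list

# A comparison that makes no sense raises an exception in Python 3:
try:
    'a' > 5
except TypeError as exc:
    print('TypeError:', exc)
```

The try/except form is covered later in the tutorial; here it just keeps the script running long enough to show that the bad comparison raises TypeError rather than returning False.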
[1] Following up on the discussion of the inexactness of float arithmetic in String Formats for Float Precision , confirm that Python does not consider .1 + .2 to be equal to .3: Write a simple condition into the Shell to test. Here is another example: Pay with Overtime. Given a person’s work hours for the week and regular hourly wage, calculate the total pay for the week, taking into account overtime. Hours worked over 40 are overtime, paid at 1.5 times the normal rate. This is a natural place for a function enclosing the calculation. Read the setup for the function: def calcWeeklyWages ( totalHours , hourlyWage ): '''Return the total weekly wages for a worker working totalHours, with a given regular hourlyWage. Include overtime for hours over 40. ''' The problem clearly indicates two cases: when no more than 40 hours are worked or when more than 40 hours are worked. In case more than 40 hours are worked, it is convenient to introduce a variable overtimeHours. You are encouraged to think about a solution before going on and examining mine. You can try running my complete example program, wages.py, also shown below. The format operation at the end of the main function uses the floating point format ( String Formats for Float Precision ) to show two decimal places for the cents in the answer: def calcWeeklyWages ( totalHours , hourlyWage ): '''Return the total weekly wages for a worker working totalHours, with a given regular hourlyWage. Include overtime for hours over 40. ''' if totalHours <= 40 : totalWages = hourlyWage * totalHours else : overtime = totalHours - 40 totalWages = hourlyWage * 40 + ( 1.5 * hourlyWage ) * overtime return totalWages def main (): hours = float ( input ( 'Enter hours worked: ' )) wage = float ( input ( 'Enter dollars paid per hour: ' )) total = calcWeeklyWages ( hours , wage ) print ( 'Wages for {hours} hours at ${wage:.2f} per hour are ${total:.2f}.' . 
format ( ** locals ())) main () Here the input was intended to be numeric, but it could be decimal so the conversion from string was via float , not int . Below is an equivalent alternative version of the body of calcWeeklyWages , used in wages1.py . It uses just one general calculation formula and sets the parameters for the formula in the if statement. There are generally a number of ways you might solve the same problem! if totalHours <= 40 : regularHours = totalHours overtime = 0 else : overtime = totalHours - 40 regularHours = 40 return hourlyWage * regularHours + ( 1.5 * hourlyWage ) * overtime The in boolean operator : There are also Boolean operators that are applied to types other than numbers. A useful Boolean operator is in , checking membership in a sequence: >>> vals = [ 'this' , 'is' , 'it' ] >>> 'is' in vals True >>> 'was' in vals False It can also be used with not , as not in , to mean the opposite: >>> vals = [ 'this' , 'is' , 'it' ] >>> 'is' not in vals False >>> 'was' not in vals True In general the two versions are: item in sequence item not in sequence Detecting the need for if statements : Like with planning programs needing for statements, you want to be able to translate English descriptions of problems that would naturally include if or if - else statements. What are some words or phrases or ideas that suggest the use of these statements? Think of your own and then compare to a few I gave: [2] 3.1.4.1. Graduate Exercise ¶ Write a program, graduate.py , that prompts students for how many credits they have. Print whether or not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.) 3.1.4.2. Head or Tails Exercise ¶ Write a program headstails.py . It should include a function flip() , that simulates a single flip of a coin: It randomly prints either Heads or Tails .
Accomplish this by choosing 0 or 1 arbitrarily with random.randrange(2) , and use an if - else statement to print Heads when the result is 0, and Tails otherwise. In your main program have a simple repeat loop that calls flip() 10 times to test it, so you generate a random sequence of 10 Heads and Tails . 3.1.4.3. Strange Function Exercise ¶ Save the example program jumpFuncStub.py as jumpFunc.py , and complete the definitions of functions jump and main as described in the function documentation strings in the program. In the jump function definition use an if - else statement (hint [3] ). In the main function definition use a for -each loop, the range function, and the jump function. The jump function is introduced for use in Strange Sequence Exercise , and others after that. 3.1.5. Multiple Tests and if - elif Statements ¶ Often you want to distinguish between more than two distinct cases, but conditions only have two possible results, True or False , so the only direct choice is between two options. As anyone who has played “20 Questions” knows, you can distinguish more cases by further questions. If there are more than two choices, a single test may only reduce the possibilities, but further tests can reduce the possibilities further and further. Since most any kind of statement can be placed in an indented statement block, one choice is a further if statement. For instance consider a function to convert a numerical grade to a letter grade, ‘A’, ‘B’, ‘C’, ‘D’ or ‘F’, where the cutoffs for ‘A’, ‘B’, ‘C’, and ‘D’ are 90, 80, 70, and 60 respectively. 
One way to write the function would be to test for one grade at a time, and resolve all the remaining possibilities inside the next else clause: def letterGrade ( score ): if score >= 90 : letter = 'A' else : # grade must be B, C, D or F if score >= 80 : letter = 'B' else : # grade must be C, D or F if score >= 70 : letter = 'C' else : # grade must be D or F if score >= 60 : letter = 'D' else : letter = 'F' return letter This repeatedly increasing indentation with an if statement as the else block can be annoying and distracting. A preferred alternative in this situation, that avoids all this indentation, is to combine each else and if block into an elif block: def letterGrade ( score ): if score >= 90 : letter = 'A' elif score >= 80 : letter = 'B' elif score >= 70 : letter = 'C' elif score >= 60 : letter = 'D' else : letter = 'F' return letter The most elaborate syntax for an if - elif - else statement is indicated in general below: if condition1 : indentedStatementBlockForTrueCondition1 elif condition2 : indentedStatementBlockForFirstTrueCondition2 elif condition3 : indentedStatementBlockForFirstTrueCondition3 elif condition4 : indentedStatementBlockForFirstTrueCondition4 else: indentedStatementBlockForEachConditionFalse The if , each elif , and the final else lines are all aligned. There can be any number of elif lines, each followed by an indented block. (Three happen to be illustrated above.) With this construction exactly one of the indented blocks is executed. It is the one corresponding to the first True condition, or, if all conditions are False , it is the block after the final else line. Be careful of the strange Python contraction. It is elif , not elseif . A program testing the letterGrade function is in example program grade1.py . See Grade Exercise . A final alternative for if statements: if - elif -.... with no else . This would mean changing the syntax for if - elif - else above so the final else: and the block after it would be omitted.
It is similar to the basic if statement without an else , in that it is possible for no indented block to be executed. This happens if none of the conditions in the tests are true. With an else included, exactly one of the indented blocks is executed. Without an else , at most one of the indented blocks is executed. if weight > 120 : print ( 'Sorry, we can not take a suitcase that heavy.' ) elif weight > 50 : print ( 'There is a $25 charge for luggage that heavy.' ) This if - elif statement only prints a line if there is a problem with the weight of the suitcase. 3.1.5.1. Sign Exercise ¶ Write a program sign.py to ask the user for a number. Print out which category the number is in: 'positive' , 'negative' , or 'zero' . 3.1.5.2. Grade Exercise ¶ In Idle, load grade1.py and save it as grade2.py Modify grade2.py so it has an equivalent version of the letterGrade function that tests in the opposite order, first for F, then D, C, .... Hint: How many tests do you need to do? [4] Be sure to run your new version and test with different inputs that test all the different paths through the program. Be careful to test around cut-off points. What does a grade of 79.6 imply? What about exactly 80? 3.1.5.3. Wages Exercise ¶ * Modify the wages.py or the wages1.py example to create a program wages2.py that assumes people are paid double time for hours over 60. Hence they get paid for at most 20 hours overtime at 1.5 times the normal rate. For example, a person working 65 hours with a regular wage of $10 per hour would work at $10 per hour for 40 hours, at 1.5 * $10 for 20 hours of overtime, and 2 * $10 for 5 hours of double time, for a total of 10*40 + 1.5*10*20 + 2*10*5 = $800. You may find wages1.py easier to adapt than wages.py . Be sure to test all paths through the program! Your program is likely to be a modification of a program where some choices worked before, but once you change things, retest for all the cases! Changes can mess up things that worked before. 3.1.6. 
Nesting Control-Flow Statements ¶ The power of a language like Python comes largely from the variety of ways basic statements can be combined . In particular, for and if statements can be nested inside each other’s indented blocks. For example, suppose you want to print only the positive numbers from an arbitrary list of numbers in a function with the following heading. Read the pieces for now. def printAllPositive ( numberList ): '''Print only the positive numbers in numberList.''' For example, suppose numberList is [3, -5, 2, -1, 0, 7] . You want to process a list, so that suggests a for -each loop, for num in numberList : but a for -each loop runs the same code body for each element of the list, and we only want print ( num ) for some of them. That seems like a major obstacle, but think closer at what needs to happen concretely. As a human, who has eyes of amazing capacity, you are drawn immediately to the actual correct numbers, 3, 2, and 7, but clearly a computer doing this systematically will have to check every number. In fact, there is a consistent action required: Every number must be tested to see if it should be printed. This suggests an if statement, with the condition num > 0 . Try loading into Idle and running the example program onlyPositive.py , whose code is shown below. It ends with a line testing the function: def printAllPositive ( numberList ): '''Print only the positive numbers in numberList.''' for num in numberList : if num > 0 : print ( num ) printAllPositive ([ 3 , - 5 , 2 , - 1 , 0 , 7 ]) This idea of nesting if statements enormously expands the possibilities with loops. Now different things can be done at different times in loops, as long as there is a consistent test to allow a choice between the alternatives. Shortly, while loops will also be introduced, and you will see if statements nested inside of them, too. The rest of this section deals with graphical examples. Run example program bounce1.py . 
It has a red ball moving and bouncing obliquely off the edges. If you watch several times, you should see that it starts from random locations. Also you can repeat the program from the Shell prompt after you have run the script. For instance, right after running the program, try in the Shell bounceBall ( - 3 , 1 ) The parameters give the amount the shape moves in each animation step. You can try other values in the Shell , preferably with magnitudes less than 10. For the remainder of the description of this example, read the extracted text pieces. The animations before this were totally scripted, saying exactly how many moves in which direction, but in this case the direction of motion changes with every bounce. The program has a graphic object shape and the central animation step is shape . move ( dx , dy ) but in this case, dx and dy have to change when the ball gets to a boundary. For instance, imagine the ball getting to the left side as it is moving to the left and up. The bounce obviously alters the horizontal part of the motion, in fact reversing it, but the ball would still continue up. The reversal of the horizontal part of the motion means that the horizontal shift changes direction and therefore its sign: dx = - dx but dy does not need to change. This switch does not happen at each animation step, but only when the ball reaches the edge of the window. It happens only some of the time - suggesting an if statement. Still the condition must be determined. Suppose the center of the ball has coordinates (x, y). When x reaches some particular x coordinate, call it xLow, the ball should bounce. The edge of the window is at coordinate 0, but xLow should not be 0, or the ball would be half way off the screen before bouncing! For the edge of the ball to hit the edge of the screen, the x coordinate of the center must be the length of the radius away, so actually xLow is the radius of the ball. Animation goes quickly in small steps, so I cheat. 
I allow the ball to take one (small, quick) step past where it really should go ( xLow ), and then we reverse it so it comes back to where it belongs. In particular if x < xLow : dx = - dx There are similar bounding variables xHigh , yLow and yHigh , all the radius away from the actual edge coordinates, and similar conditions to test for a bounce off each possible edge. Note that whichever edge is hit, one coordinate, either dx or dy, reverses. One way the collection of tests could be written is if x < xLow : dx = - dx if x > xHigh : dx = - dx if y < yLow : dy = - dy if y > yHigh : dy = - dy This approach would cause there to be some extra testing: If it is true that x < xLow , then it is impossible for it to be true that x > xHigh , so we do not need both tests together. We avoid unnecessary tests with an elif clause (for both x and y): if x < xLow : dx = - dx elif x > xHigh : dx = - dx if y < yLow : dy = - dy elif y > yHigh : dy = - dy Note that the middle if is not changed to an elif , because it is possible for the ball to reach a corner , and need both dx and dy reversed. The program also uses several methods to read part of the state of graphics objects that we have not used in examples yet. Various graphics objects, like the circle we are using as the shape, know their center point, and it can be accessed with the getCenter() method. (Actually a clone of the point is returned.) Also each coordinate of a Point can be accessed with the getX() and getY() methods. This explains the new features in the central function defined for bouncing around in a box, bounceInBox . The animation arbitrarily goes on in a simple repeat loop for 600 steps. (A later example will improve this behavior.) def bounceInBox ( shape , dx , dy , xLow , xHigh , yLow , yHigh ): ''' Animate a shape moving in jumps (dx, dy), bouncing when its center reaches the low and high x and y coordinates. ''' delay = . 005 for i in range ( 600 ): shape . move ( dx , dy ) center = shape . 
getCenter () x = center . getX () y = center . getY () if x < xLow : dx = - dx elif x > xHigh : dx = - dx if y < yLow : dy = - dy elif y > yHigh : dy = - dy time . sleep ( delay ) The program starts the ball from an arbitrary point inside the allowable rectangular bounds. This is encapsulated in a utility function included in the program, getRandomPoint . The getRandomPoint function uses the randrange function from the module random . Note that in parameters for both the functions range and randrange , the end stated is past the last value actually desired: def getRandomPoint ( xLow , xHigh , yLow , yHigh ): '''Return a random Point with coordinates in the range specified.''' x = random . randrange ( xLow , xHigh + 1 ) y = random . randrange ( yLow , yHigh + 1 ) return Point ( x , y ) The full program is listed below, repeating bounceInBox and getRandomPoint for completeness. Several parts that may be useful later, or are easiest to follow as a unit, are separated out as functions. Make sure you see how it all hangs together or ask questions! ''' Show a ball bouncing off the sides of the window. ''' from graphics import * import time , random def bounceInBox ( shape , dx , dy , xLow , xHigh , yLow , yHigh ): ''' Animate a shape moving in jumps (dx, dy), bouncing when its center reaches the low and high x and y coordinates. ''' delay = . 005 for i in range ( 600 ): shape . move ( dx , dy ) center = shape . getCenter () x = center . getX () y = center . getY () if x < xLow : dx = - dx elif x > xHigh : dx = - dx if y < yLow : dy = - dy elif y > yHigh : dy = - dy time . sleep ( delay ) def getRandomPoint ( xLow , xHigh , yLow , yHigh ): '''Return a random Point with coordinates in the range specified.''' x = random . randrange ( xLow , xHigh + 1 ) y = random . randrange ( yLow , yHigh + 1 ) return Point ( x , y ) def makeDisk ( center , radius , win ): '''return a red disk that is drawn in win with given center and radius.''' disk = Circle ( center , radius ) disk . 
setOutline ( "red" ) disk . setFill ( "red" ) disk . draw ( win ) return disk def bounceBall ( dx , dy ): '''Make a ball bounce around the screen, initially moving by (dx, dy) at each jump.''' win = GraphWin ( 'Ball Bounce' , 290 , 290 ) win . yUp () radius = 10 xLow = radius # center is separated from the wall by the radius at a bounce xHigh = win . getWidth () - radius yLow = radius yHigh = win . getHeight () - radius center = getRandomPoint ( xLow , xHigh , yLow , yHigh ) ball = makeDisk ( center , radius , win ) bounceInBox ( ball , dx , dy , xLow , xHigh , yLow , yHigh ) win . close () bounceBall ( 3 , 5 ) 3.1.6.1. Short String Exercise ¶ Write a program short.py with a function printShort with heading: def printShort ( strings ): '''Given a list of strings, print the ones with at most three characters. >>> printShort(['a', 'long', 'one']) a one ''' In your main program, test the function, calling it several times with different lists of strings. Hint: Find the length of each string with the len function. The function documentation here models a common approach: illustrating the behavior of the function with a Python Shell interaction. This part begins with a line starting with >>> . Other exercises and examples will also document behavior in the Shell. 3.1.6.2. Even Print Exercise ¶ Write a program even1.py with a function printEven with heading: def printEven ( nums ): '''Given a list of integers nums, print the even ones. >>> printEven([4, 1, 3, 2, 7]) 4 2 ''' In your main program, test the function, calling it several times with different lists of integers. Hint: A number is even if its remainder, when dividing by 2, is 0. 3.1.6.3. Even List Exercise ¶ Write a program even2.py with a function chooseEven with heading: def chooseEven ( nums ): '''Given a list of integers, nums, return a list containing only the even ones.
>>> chooseEven([4, 1, 3, 2, 7]) [4, 2] ''' In your main program, test the function, calling it several times with different lists of integers and printing the results in the main program. (The documentation string illustrates the function call in the Python shell, where the return value is automatically printed. Remember, that in a program, you only print what you explicitly say to print.) Hint: In the function, create a new list, and append the appropriate numbers to it, before returning the result. 3.1.6.4. Unique List Exercise ¶ * The madlib2.py program has its getKeys function, which first generates a list of each occurrence of a cue in the story format. This gives the cues in order, but likely includes repetitions. The original version of getKeys uses a quick method to remove duplicates, forming a set from the list. There is a disadvantage in the conversion, though: Sets are not ordered, so when you iterate through the resulting set, the order of the cues will likely bear no resemblance to the order they first appeared in the list. That issue motivates this problem: Copy madlib2.py to madlib2a.py , and add a function with this heading: def uniqueList ( aList ): ''' Return a new list that includes the first occurrence of each value in aList, and omits later repeats. The returned list should include the first occurrences of values in aList in their original order. >>> vals = ['cat', 'dog', 'cat', 'bug', 'dog', 'ant', 'dog', 'bug'] >>> uniqueList(vals) ['cat', 'dog', 'bug', 'ant'] ''' Hint: Process aList in order. Use the in syntax to only append elements to a new list that are not already in the new list. After perfecting the uniqueList function, replace the last line of getKeys , so it uses uniqueList to remove duplicates in keyList . Check that your madlib2a.py prompts you for cue values in the order that the cues first appear in the madlib format string. 3.1.7. 
Compound Boolean Expressions ¶ To be eligible to graduate from Loyola University Chicago, you must have 120 credits and a GPA of at least 2.0. This translates directly into Python as a compound condition : credits >= 120 and GPA >= 2.0 This is true if both credits >= 120 is true and GPA >= 2.0 is true. A short example program using this would be: credits = float ( input ( 'How many units of credit do you have? ' )) GPA = float ( input ( 'What is your GPA? ' )) if credits >= 120 and GPA >= 2.0 : print ( 'You are eligible to graduate!' ) else : print ( 'You are not eligible to graduate.' ) The new Python syntax is for the operator and : condition1 and condition2 The compound condition is true if both of the component conditions are true. It is false if at least one of the conditions is false. See Congress Exercise . In the last example in the previous section, there was an if - elif statement where both tests had the same block to be done if the condition was true: if x < xLow : dx = - dx elif x > xHigh : dx = - dx There is a simpler way to state this in a sentence: If x < xLow or x > xHigh, switch the sign of dx. That translates directly into Python: if x < xLow or x > xHigh : dx = - dx The word or makes another compound condition: condition1 or condition2 is true if at least one of the conditions is true. It is false if both conditions are false. This corresponds to one way the word “or” is used in English. Other times in English “or” is used to mean exactly one alternative is true. Warning When translating a problem stated in English using “or”, be careful to determine whether the meaning matches Python’s or . It is often convenient to encapsulate complicated tests inside a function. Think how to complete the function starting: def isInside ( rect , point ): '''Return True if the point is inside the Rectangle rect.''' pt1 = rect . getP1 () pt2 = rect . getP2 () Recall that a Rectangle is specified in its constructor by two diagonally opposite Point s.
This example gives the first use in the tutorials of the Rectangle methods that recover those two corner points, getP1 and getP2 . The program calls the points obtained this way pt1 and pt2 . The x and y coordinates of pt1 , pt2 , and point can be recovered with the methods of the Point type, getX() and getY() . Suppose that I introduce variables for the x coordinates of pt1 , point , and pt2 , calling these x-coordinates end1 , val , and end2 , respectively. On first try you might decide that the needed mathematical relationship to test is end1 <= val <= end2 Unfortunately, this is not enough: The only requirement for the two corner points is that they be diagonally opposite, not that the coordinates of the second point are higher than the corresponding coordinates of the first point. It could be that end1 is 200; end2 is 100, and val is 120. In this latter case val is between end1 and end2 , but substituting into the expression above 200 <= 120 <= 100 is False. The 100 and 200 need to be reversed in this case. This makes a complicated situation. Also this is an issue which must be revisited for both the x and y coordinates. I introduce an auxiliary function isBetween to deal with one coordinate at a time. It starts: def isBetween ( val , end1 , end2 ): '''Return True if val is between the ends. The ends do not need to be in increasing order.''' Clearly this is true if the original expression, end1 <= val <= end2 , is true. You must also consider the possible case when the order of the ends is reversed: end2 <= val <= end1 . How do we combine these two possibilities? The Boolean connectives to consider are and and or . Which applies? You only need one to be true, so or is the proper connective: A correct but redundant function body would be: if end1 <= val <= end2 or end2 <= val <= end1 : return True else : return False Check the meaning: if the compound expression is True , return True . 
If the condition is False , return False – in either case return the same value as the test condition. See that a much simpler and neater version is to just return the value of the condition itself! return end1 <= val <= end2 or end2 <= val <= end1 Note In general you should not need an if - else statement to choose between true and false values! Operate directly on the boolean expression. A side comment on expressions like end1 <= val <= end2 Other than the two-character operators, this is like standard math syntax, chaining comparisons. In Python any number of comparisons can be chained in this way, closely approximating mathematical notation. Though this is good Python, be aware that if you try other high-level languages like Java and C++, such an expression is gibberish. Another way the expression can be expressed (and which translates directly to other languages) is: end1 <= val and val <= end2 So much for the auxiliary function isBetween . Back to the isInside function. You can use the isBetween function to check the x coordinates, isBetween ( point . getX (), p1 . getX (), p2 . getX ()) and to check the y coordinates, isBetween ( point . getY (), p1 . getY (), p2 . getY ()) Again the question arises: how do you combine the two tests? In this case we need the point to be both between the sides and between the top and bottom, so the proper connector is and . Think how to finish the isInside method. Hint: [5] Sometimes you want to test the opposite of a condition. As in English you can use the word not . For instance, to test if a Point was not inside Rectangle Rect, you could use the condition not isInside ( rect , point ) In general, not condition is True when condition is False , and False when condition is True . The example program chooseButton1.py , shown below, is a complete program using the isInside function in a simple application, choosing colors. Pardon the length. Do check it out. 
It will be the starting point for a number of improvements that shorten it and make it more powerful in the next section. First a brief overview:

The program includes the functions isBetween and isInside that have already been discussed.

The program creates a number of colored rectangles to use as buttons and also as picture components. Aside from specific data values, the code to create each rectangle is the same, so the action is encapsulated in a function, makeColoredRect. All of this is fine, and will be preserved in later versions.

The present main function is long, though. It has the usual graphics starting code, draws buttons and picture elements, and then has a number of code sections prompting the user to choose a color for a picture element. Each code section has a long if-elif-else test to see which button was clicked, and sets the color of the picture element appropriately.

    '''Make a choice of colors via mouse clicks in Rectangles --
    A demonstration of Boolean operators and Boolean functions.'''

    from graphics import *

    def isBetween(x, end1, end2):
        '''Return True if x is between the ends or equal to either.
        The ends do not need to be in increasing order.'''
        return end1 <= x <= end2 or end2 <= x <= end1

    def isInside(point, rect):
        '''Return True if the point is inside the Rectangle rect.'''
        pt1 = rect.getP1()
        pt2 = rect.getP2()
        return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
               isBetween(point.getY(), pt1.getY(), pt2.getY())

    def makeColoredRect(corner, width, height, color, win):
        '''Return a Rectangle drawn in win with the upper left corner
        and color specified.'''
        corner2 = corner.clone()
        corner2.move(width, -height)
        rect = Rectangle(corner, corner2)
        rect.setFill(color)
        rect.draw(win)
        return rect

    def main():
        win = GraphWin('pick Colors', 400, 400)
        win.yUp()  # right side up coordinates

        redButton = makeColoredRect(Point(310, 350), 80, 30, 'red', win)
        yellowButton = makeColoredRect(Point(310, 310), 80, 30, 'yellow', win)
        blueButton = makeColoredRect(Point(310, 270), 80, 30, 'blue', win)

        house = makeColoredRect(Point(60, 200), 180, 150, 'gray', win)
        door = makeColoredRect(Point(90, 150), 40, 100, 'white', win)
        roof = Polygon(Point(50, 200), Point(250, 200), Point(150, 300))
        roof.setFill('black')
        roof.draw(win)

        msg = Text(Point(win.getWidth()/2, 375), 'Click to choose a house color.')
        msg.draw(win)
        pt = win.getMouse()
        if isInside(pt, redButton):
            color = 'red'
        elif isInside(pt, yellowButton):
            color = 'yellow'
        elif isInside(pt, blueButton):
            color = 'blue'
        else:
            color = 'white'
        house.setFill(color)

        msg.setText('Click to choose a door color.')
        pt = win.getMouse()
        if isInside(pt, redButton):
            color = 'red'
        elif isInside(pt, yellowButton):
            color = 'yellow'
        elif isInside(pt, blueButton):
            color = 'blue'
        else:
            color = 'white'
        door.setFill(color)

        win.promptClose(msg)

    main()

The only further new feature used is in the long return statement in isInside:

    return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
           isBetween(point.getY(), pt1.getY(), pt2.getY())

Recall that Python is smart enough to realize that a statement continues to the next line if there is an unmatched pair of parentheses or brackets. Above is another situation with a long statement, but there are no unmatched parentheses on a line. For readability it is best not to make an enormously long line that would run off your screen or paper. Continuing to the next line is recommended. You can make the final character on a line be a backslash ( \ ) to indicate that the statement continues on the next line. This is not particularly neat, but it is a rather rare situation.
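The two continuation styles can be compared side by side in a minimal sketch, with simple numeric comparisons standing in for the graphics calls:

```python
x, y = 3, 4

# A trailing backslash marks an explicit line continuation:
inside = 1 <= x <= 5 and \
         2 <= y <= 6

# An unmatched open parenthesis lets Python continue implicitly;
# many style guides prefer this form over the backslash:
inside2 = (1 <= x <= 5 and
           2 <= y <= 6)

print(inside, inside2)  # True True
```

Both assignments build exactly the same boolean expression; only the line-joining mechanism differs.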
Most statements fit neatly on one line, and the creator of Python decided it was best to make the syntax simple in the most common situation. (Many other languages require a special statement terminator symbol like ';' and pay no attention to newlines.) Extra parentheses here would not hurt, so an alternative would be

    return (isBetween(point.getX(), pt1.getX(), pt2.getX()) and
            isBetween(point.getY(), pt1.getY(), pt2.getY()))

The chooseButton1.py program is long partly because of repeated code. The next section gives another version involving lists.

3.1.7.1. Congress Exercise

A person is eligible to be a US Senator who is at least 30 years old and has been a US citizen for at least 9 years. Write an initial version of a program congress.py to obtain age and length of citizenship from the user and print out whether a person is eligible to be a Senator or not.

A person is eligible to be a US Representative who is at least 25 years old and has been a US citizen for at least 7 years. Elaborate your program congress.py so it obtains age and length of citizenship and prints out just the one of the following three statements that is accurate:

    You are eligible for both the House and Senate.
    You are eligible only for the House.
    You are ineligible for Congress.

3.1.8. More String Methods

Here are a few more string methods useful in the next exercises, assuming the methods are applied to a string s:

s.startswith(pre) returns True if string s starts with string pre: both '-123'.startswith('-') and 'downstairs'.startswith('down') are True, but '1-2-3'.startswith('-') is False.

s.endswith(suffix) returns True if string s ends with string suffix: both 'whoever'.endswith('ever') and 'downstairs'.endswith('airs') are True, but '1-2-3'.endswith('-') is False.

s.replace(sub, replacement, count) returns a new string with up to the first count occurrences of string sub replaced by replacement.
The replacement can be the empty string to delete sub. For example:

    s = '-123'
    t = s.replace('-', '', 1)       # t equals '123'
    t = t.replace('-', '', 1)       # t is still equal to '123'
    u = '.2.3.4.'
    v = u.replace('.', '', 2)       # v equals '23.4.'
    w = u.replace('.', ' dot ', 5)  # w equals ' dot 2 dot 3 dot 4 dot '

3.1.8.1. Article Start Exercise

In library alphabetizing, if the initial word is an article ("The", "A", "An"), then it is ignored when ordering entries. Write a program completing this function, and then testing it:

    def startsWithArticle(title):
        '''Return True if the first word of title is "The", "A" or "An".'''

Be careful: if the title starts with "There", it does not start with an article. What should you be testing for?

3.1.8.2. Is Number String Exercise

** In the later Safe Number Input Exercise, it will be important to know if a string can be converted to the desired type of number. Explore that here. Save example isNumberStringStub.py as isNumberString.py and complete it. It contains headings and documentation strings for the functions in both parts of this exercise.

A legal whole-number string consists entirely of digits. Luckily strings have an isdigit method, which returns True when a nonempty string consists entirely of digits, so '2397'.isdigit() returns True, and '23a'.isdigit() returns False, exactly corresponding to the situations when the string represents a whole number! In both parts be sure to test carefully. Not only confirm that all appropriate strings return True; also be sure to test that you return False for all sorts of bad strings.

Recognizing an integer string is more involved, since it can start with a minus sign (or not). Hence the isdigit method is not enough by itself. This part is the most straightforward if you have worked on the sections String Indices and String Slices. An alternate approach works if you use the count method from Object Orientation, and some methods from this section.
Complete the function isIntStr.

Complete the function isDecimalStr, which introduces the possibility of a decimal point (though a decimal point is not required). The string methods mentioned in the previous part remain useful.

[1] This is an improvement that is new in Python 3.
[2] "In this case do ___; otherwise", "if ___, then", "when ___ is true, then", "___ depends on whether",
[3] If you divide an even number by 2, what is the remainder? Use this idea in your if condition.
[4] 4 tests to distinguish the 5 cases, as in the previous version
[5] Once again, you are calculating and returning a Boolean result. You do not need an if-else statement.

© Copyright 2019, Dr. Andrew N. Harrington. Last updated on Jan 05, 2020. Created using Sphinx 1.3.1+.
https://llvmweekly.org/issue/548

LLVM Weekly - #548, July 1st 2024

Welcome to the five hundred and forty-eighth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org.

Last week I was honoured and delighted to be recognised with the Software Contributor Award by the RISC-V International Board of Directors.

News and articles from around the web and events

Additional videos were uploaded to the EuroLLVM 2024 recordings playlist.

Fangrui Song wrote up integrated assembler improvements in LLVM 19, which resulted in measurable performance gains.

The next Seattle area LLVM social will take place on Tuesday July 16th.

According to the LLVM calendar, in the coming week there will be the following:

- Office hours with the following hosts: Renato Golin, Quentin Colombet, Johannes Doerfert.
- Online sync-ups on the following topics: MLIR C/C++ frontend, pointer authentication, SPIR-V, AArch64, new contributors, OpenMP, Flang, LLVM libc, MLIR.

For more details see the LLVM calendar, getting involved documentation on online sync-ups and office hours.

On the forums

Tobias Hieta shared the proposed LLVM 19 release schedule, with the branch occurring on 23rd July and the final release planned for 3rd September.

Nikita Popov proposes adding a nopoison attribute. Much of the discussion is around the semantics of a store of a poison value.

Volodymyr Sapsai is bumping the RFC on adding an option to enable precompiled headers for LLVM libraries, summarising the discussion so far.

Andy Kaylor started an RFC discussion on introducing stronger guarantees for denormal-fp-math.
Amrit Bhogal kicked off an RFC thread on allowing [[gnu::cleanup]] to work with [[clang::overloadable]].

LLVM commits

- The LLVM part of the numerical sanitizer was committed. 1710679.
- The AArch64 backend learned to lower ptrauth constants in code. 1488fb4.
- Function, GlobalValue, BasicBlock and Instruction gained a new getDataLayout() helper. 2d209d9, 9df71d7.
- Support was added for the SPV_KHR_cooperative_matrix SPIR-V extension. 57f7937.
- SmallPtrSet gained a remove_if() method. f019581.
- A scheduling model was introduced for the Syntacore SCR3. 2d84e0f.
- The Xtensa backend can now lower GlobalAddress/BlockAddress/JumpTable. cc8fdd6.
- Functions created via createWithDefaultAddr() now have target-{cpu,features} attributes added. 89d8df1.
- The minimum Z3 version was bumped to 4.8.9 from 4.7.1. b7762f2.
- MachineDomTreeUpdater was introduced, built on top of GenericDomTreeUpdater (which was generalised from DomTreeUpdater). c931ac5.

Clang commits

- Pointer authentication was implemented for C++ virtual functions, vtables, and virtual table tables (VTTs). 1b8ab2f.
- The --print-enabled-extensions option was added to Clang for AArch64 and will print the list of extensions that are enabled for the target by the combination of -target, -march, and -mcpu. bb83a3d.
- __builtin_verbose_trap was added. 2604830.
- nonblocking and nonallocating attributes were introduced. f03cb00.
- A new option to remove leading blank lines was added to clang-format. 9267f8f.
- __builtin_object_size was documented. 569faa4.

Other project commits

- Documentation was added for optimising the Linux kernel with BOLT. ec2fb59.
- NVPTX profiling infrastructure was added to LLVM's libc. 02b57de.
- A generic convert-to-spirv pass was added to MLIR. 13c1fec.

Subscribe at LLVMWeekly.org.
https://releases.llvm.org/18.1.8/tools/flang/docs/ReleaseNotes.html

Flang |version| (In-Progress) Release Notes

Warning: These are in-progress notes for the upcoming LLVM |version| release. Release notes for previous releases can be found on the Download Page.

Introduction

This document contains the release notes for the Flang Fortran frontend, part of the LLVM Compiler Infrastructure, release |version|. Here we describe the status of Flang in some detail, including major improvements from the previous release and new feature work. For the general LLVM release notes, see the LLVM documentation. All LLVM releases may be downloaded from the LLVM releases web site.

Note that if you are reading this file from a Git checkout, this document applies to the next release, not the current one. To see the release notes for a specific release, please see the releases page.

Major New Features
Bug Fixes
Non-comprehensive list of changes in this release
New Compiler Flags
Windows Support
Fortran Language Changes in Flang
Build System Changes
New Issues Found

Additional Information

Flang's documentation is located in the flang/docs/ directory in the LLVM monorepo. If you have any questions or comments about Flang, please feel free to contact us on the Discourse forums.

© Copyright 2017-2024, The Flang Team. Last updated on Jun 19, 2024. Created using Sphinx 7.1.2.
https://llvmweekly.org/issue/550

LLVM Weekly - #550, July 15th 2024

Welcome to the five hundred and fiftieth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org.

News and articles from around the web and events

The call for talk proposals at the LLVM Developers' Meeting is now open and closes on August 11th.

The next Portland LLVM social will take place on July 17th.

The next Munich LLVM meetup will take place on July 24th.

According to the LLVM calendar, in the coming week there will be the following:

- Office hours with the following hosts: Phoebe Wang, Johannes Doerfert, Aaron Ballman.
- Online sync-ups on the following topics: pointer authentication, vectorizer improvements, security group, new contributors, OpenMP, Clang C/C++ language working group, Flang, floating point, RISC-V, MLIR, embedded toolchains.

For more details see the LLVM calendar, getting involved documentation on online sync-ups and office hours.

On the forums

Ludger Paehler shared a new dataset of LLVM IR for machine learning and other uses.

Justin Stitt posted an RFC on disabling sanitizer instrumentation for common overflow idioms.

Johannes Doerfert shared some educational compiler videos he's started to release and is welcoming feedback.

Yuxuan Chen started a thread on language extensions for better and more deterministic HALO (heap allocation elision optimisation) for C++ coroutines.

Ivan Ivanov kicked off an RFC discussion on adding support for the OpenMP workdistribute construct in Flang.
Renato Golin started an MLIR RFC discussion on a transpose attribute for linalg matmul operations.

LLVM commits

- LoopIdiomRecognize learned to recognize "shift until less-than". 83b01aa.
- LLVM's MC layer (assembler) now supports .cfi_label, fixing a common case where GCC-generated assembly fails to work with LLVM. 2718654.
- Branch hint prefixes are now supported in the X86 backend. e603451.
- Improved codegen is now implemented for ISD::CTLZ_ZERO_UNDEF, taking advantage of the fact zero is an invalid input. 69192e0.
- The .insn <value> and .insn <insn-length>, <value> variants of the .insn directive are now supported for RISC-V. 2a086dc.
- Documentation on new atomicrmw metadata was added for AMDGPU. 62d9497.
- The RISCVUsage documentation now gives guidance on the supported RISC-V profiles. 884a07f.
- Support was added for the RISC-V QingKe "XW" compressed instruction set extensions, as used in WCH microcontrollers. 3c5f929.
- Support was added for resolving conflicts between vendor-specific CSRs in RISC-V. 0628446.

Clang commits

- modernize-use-ranges and boost-use-ranges checks were added to clang-tidy to convert iterator algorithms into C++20 or Boost ranges. 1038db6.
- Support was added for the sized_by, counted_by_or_null and sized_by_or_null attributes. e22ebee.
- Raw string literals are now supported as an extension in GNU C modes. e464684.
- Documentation was added on working around an issue with missing vtables for classes attached to modules. 62a7562.
- -fptrauth-function-pointer-type-discrimination is now supported. ae18b94.
- -m[no-]scalar-strict-align and -m[no-]vector-strict-align are now provided for RISC-V. 73acf8d.

Other project commits

- Initial support was added to compiler-rt builtins for GPU targets. dad7442.
- Support for -fhermetic-module-files was added to Flang. 6598795.
- [f]pathconf was implemented in LLVM's libc. 7f3c40a.
- A new CyclicReplacerCache was added to MLIR, intended for use as a cache for functions that map values between two domains. ec50f58.

Subscribe at LLVMWeekly.org.
https://www.php.net/session-name

PHP: session_name - Manual
session_name

(PHP 4, PHP 5, PHP 7, PHP 8)

session_name — Get and/or set the current session name

Description

    session_name(?string $name = null): string|false

session_name() returns the name of the current session. If name is given, session_name() will update the session name and return the old session name. If a new session name is supplied, session_name() modifies the HTTP cookie (and outputs the content when session.use_trans_sid is enabled). Once the HTTP cookie has been sent, calling session_name() raises an E_WARNING.

session_name() must be called before session_start() for the session to work properly.

The session name is reset to the default value stored in session.name at request startup time. Thus, you need to call session_name() for every request (and before session_start() is called).

Parameters

name

The session name references the name of the session, which is used in cookies and URLs (e.g. PHPSESSID). It should contain only alphanumeric characters; it should be short and descriptive (i.e. for users with enabled cookie warnings). If name is specified and not null, the name of the current session is changed to its value.

Warning: The session name can't consist of digits only; at least one letter must be present. Otherwise a new session id is generated every time.

Return Values

Returns the name of the current session. If name is given and the function updates the session name, the name of the old session is returned, or false on failure.

Changelog

    8.0.0  name is nullable now.
    7.2.0  session_name() checks the session status; previously it only checked the cookie status. Therefore, the older session_name() allowed calling session_name() after session_start(), which could crash PHP and result in misbehaviour.

Examples

Example #1 session_name() example

    <?php
    /* set the session name to WebsiteID */
    $previous_name = session_name("WebsiteID");
    echo "The previous session name was $previous_name<br />";
    ?>

See Also

    The session.name configuration directive

User Contributed Notes

Hongliang Qiang (21 years ago)

This may sound no-brainer: the session_name() function will have no essential effect if you set session.auto_start to "true" in php.ini. The obvious explanation is that the session has already started, and thus cannot be altered, before the session_name() function -- wherever it is in the script -- is executed; for the same reason session_name() needs to be called before session_start(), as documented. I know it is really not a big deal, but I had a quite hard time before figuring this out, and hope it might be helpful to someone like me.

php at wiz dot cx (17 years ago)

If you try to name a PHP session "example.com" it gets converted to "example_com" and everything breaks. Don't use a period in your session name.

relsqui at chiliahedron dot com (16 years ago)

Remember, kids -- you MUST use session_name() first if you want to use session_set_cookie_params() to, say, change the session timeout. Otherwise it won't work, won't give any error, and nothing in the documentation (that I've seen, anyway) will explain why. Thanks to brandan of bildungsroman.com who left a note under session_set_cookie_params() explaining this, or I'd probably still be throwing my hands up about it.

Joseph Dalrymple (14 years ago)

For those wondering, this function is expensive!
On a script that was executing in a consistent 0.0025 seconds, just the use of session_name("foo") shot my execution time up to ~0.09s. By simply sacrificing session_name("foo"), I sped my script up by roughly 0.09 seconds.

Victor H (10 years ago)

As Joseph Dalrymple said, adding session_name() does slow down the execution time a little. But what I've observed is that it decreased the fluctuation between requests. Requests on my script fluctuated between 0.045 and 0.022 seconds. With session_name("myapp"), it goes to 0.050 and 0.045. Not a big deal, but that's a point to note. For those with problems setting the name: when session.auto_start is set to 1, you need to set session.name in php.ini!

mmulej at gmail dot com (4 years ago)

Hope this is not out of php.net noting scope. session_name('name') must be set before session_start() because the former changes ini settings and the latter reads them. For the same reason session_set_cookie_params($options) must be set before session_start() as well. I find it best to do the following:

    function is_session_started() {
        if (php_sapi_name() === 'cli') return false;
        if (version_compare(phpversion(), '5.4.0', '>=')) return session_status() === PHP_SESSION_ACTIVE;
        return session_id() !== '';
    }

    if (!is_session_started()) {
        session_name($session_name);
        session_set_cookie_params($cookie_options);
        session_start();
    }

tony at marston-home dot demon dot co dot uk (7 years ago)

The description that session_name() gets and/or sets the name of the current session is technically wrong. It does nothing but deal with the value originally supplied by the session.name value within the php.ini file.
Thus:

    $name = session_name();

is functionally equivalent to

    $name = ini_get('session.name');

and

    session_name('newname');

is functionally equivalent to

    ini_set('session.name', 'newname');

This also means that:

    $old_name = session_name('newname');

is functionally equivalent to

    $old_name = ini_set('session.name', 'newname');

The current value of session.name is not attached to a session until session_start() is called. Once session_start() has used session.name to look up the session_id() in the cookie data, the name becomes irrelevant, as all further operations on the session data are keyed by the session_id(). Note that changing session.name while a session is currently active will not update the name in any session cookie. The new name does not take effect until the next call to session_start(), and this requires that the current session, which was created with the previous value for session.name, be closed.

tony at marston-home dot demon dot co dot uk (7 years ago)

The description has recently been modified to contain the statement "When new session name is supplied, session_name() modifies HTTP cookie". This is not correct, as session_name() has never modified any cookie data. A change in session.name does not become effective until session_start() is called, and it is session_start() that creates the cookie if it does not already exist. See the following bug report for details: https://bugs.php.net/bug.php?id=76413

descartavel1+php at gmail dot com (2 years ago)

Always try to set the prefix for your session name attribute to either `__Host-` or `__Secure-` to benefit from browsers' improved security.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes

Also, if you have auto_session enabled, you must set this name in session.name in your config (php.ini, htaccess, etc).

Copyright © 2001-2026 The PHP Documentation Group
https://www.php.net/manual/it/function.session-name.php | PHP: session_name - Manual update page now Downloads Documentation Get Involved Help Search docs Getting Started Introduction A simple tutorial Language Reference Basic syntax Types Variables Constants Expressions Operators Control Structures Functions Classes and Objects Namespaces Enumerations Errors Exceptions Fibers Generators Attributes References Explained Predefined Variables Predefined Exceptions Predefined Interfaces and Classes Predefined Attributes Context options and parameters Supported Protocols and Wrappers Security Introduction General considerations Installed as CGI binary Installed as an Apache module Session Security Filesystem Security Database Security Error Reporting User Submitted Data Hiding PHP Keeping Current Features HTTP authentication with PHP Cookies Sessions Handling file uploads Using remote files Connection handling Persistent Database Connections Command line usage Garbage Collection DTrace Dynamic Tracing Function Reference Affecting PHP's Behaviour Audio Formats Manipulation Authentication Services Command Line Specific Extensions Compression and Archive Extensions Cryptography Extensions Database Extensions Date and Time Related Extensions File System Related Extensions Human Language and Character Encoding Support Image Processing and Generation Mail Related Extensions Mathematical Extensions Non-Text MIME Output Process Control Extensions Other Basic Extensions Other Services Search Engine Extensions Server Specific Extensions Session Extensions Text Processing Variable and Type Related Extensions Web Services Windows Only Extensions XML Manipulation GUI Extensions Keyboard Shortcuts ? 
This help j Next menu item k Previous menu item g p Previous man page g n Next man page G Scroll to bottom g g Scroll to top g h Goto homepage g s Goto search (current page) / Focus search box session_regenerate_id » « session_module_name Manuale PHP Guida Funzioni Estensioni di sessione Sessions Session Funzioni Change language: English German Spanish French Italian Japanese Brazilian Portuguese Russian Turkish Ukrainian Chinese (Simplified) Other session_name (PHP 4, PHP 5, PHP 7, PHP 8) session_name — Recupera e/o imposta il nome della sessione corrente Descrizione session_name ( string $name = ? ): string session_name() ritorna il nome della sessione corrente. Se viene fornito name , session_name() aggiornerà il nome della sessione e restituirà il vecchio nome della sessione. Il nome di sessione è ripristinato al valore memorizzato in session.name al momento della request. Quindi, si deve chiamare session_name() ad ogni richiesta (e prima che session_start() o session_register() vengano chiamate). Elenco dei parametri name Il nome della sessione si riferisce al nome della sessione, che viene utilizzato nei cookies e negli URL (p.e. PHPSESSID ). Dovrebbe contenere solo caratteri alfanumerici; dovrebbe essere corto e descrittivo (p.e. per utenti con l'avviso di cookie attivo). Se name è specificato, il nome della sessione corrente viene cambiato al suo valore. Avviso Il nome di sessione non può essere composto da sole cifre numeriche, deve essere presente almeno una lettera. In caso contrario un nuovo nome è generato ogni volta. Valori restituiti Restituisce il nome della sessione corrente. Se viene passato il name e la funzione aggiorna il nome della sessione, viene restituito il nome della vecchia sessione. 
Esempi Example #1 session_name() esempi <?php // imposta il nome di sessione a WebsiteID $previous_name = session_name ( "WebsiteID" ); echo "Il precedente nome di sessione è $previous_name <br />" ; ?> Vedere anche: La direttiva di configurazione session.name Found A Problem? Learn How To Improve This Page • Submit a Pull Request • Report a Bug + add a note User Contributed Notes 9 notes up down 146 Hongliang Qiang ¶ 21 years ago This may sound no-brainer: the session_name() function will have no essential effect if you set session.auto_start to "true" in php.ini . And the obvious explanation is the session already started thus cannot be altered before the session_name() function--wherever it is in the script--is executed, same reason session_name needs to be called before session_start() as documented. I know it is really not a big deal. But I had a quite hard time before figuring this out, and hope it might be helpful to someone like me. up down 65 php at wiz dot cx ¶ 17 years ago if you try to name a php session "example.com" it gets converted to "example_com" and everything breaks. don't use a period in your session name. up down 40 relsqui at chiliahedron dot com ¶ 16 years ago Remember, kids--you MUST use session_name() first if you want to use session_set_cookie_params() to, say, change the session timeout. Otherwise it won't work, won't give any error, and nothing in the documentation (that I've seen, anyway) will explain why. Thanks to brandan of bildungsroman.com who left a note under session_set_cookie_params() explaining this or I'd probably still be throwing my hands up about it. up down 21 Joseph Dalrymple ¶ 14 years ago For those wondering, this function is expensive! On a script that was executing in a consistent 0.0025 seconds, just the use of session_name("foo") shot my execution time up to ~0.09s. By simply sacrificing session_name("foo"), I sped my script up by roughly 0.09 seconds. 
Victor H ¶ 10 years ago
As Joseph Dalrymple said, adding session_name() does slow down execution time a little. But what I've observed is that it decreased the fluctuation between requests. Requests on my script fluctuated between 0.045 and 0.022 seconds. With session_name("myapp"), it goes to between 0.050 and 0.045. Not a big deal, but that's a point to note. For those with problems setting the name: when session.auto_start is set to 1, you need to set session.name in php.ini!

mmulej at gmail dot com ¶ 4 years ago
Hope this is not out of php.net noting scope. session_name('name') must be set before session_start() because the former changes ini settings and the latter reads them. For the same reason, session_set_cookie_params($options) must be set before session_start() as well. I find it best to do the following:

function is_session_started() {
    if (php_sapi_name() === 'cli') return false;
    if (version_compare(phpversion(), '5.4.0', '>=')) return session_status() === PHP_SESSION_ACTIVE;
    return session_id() !== '';
}

if (!is_session_started()) {
    session_name($session_name);
    session_set_cookie_params($cookie_options);
    session_start();
}

tony at marston-home dot demon dot co dot uk ¶ 7 years ago
The description that session_name() gets and/or sets the name of the current session is technically wrong. It does nothing but deal with the value originally supplied by the session.name value within the php.ini file. Thus:

$name = session_name();
is functionally equivalent to
$name = ini_get('session.name');

and
session_name('newname');
is functionally equivalent to
ini_set('session.name', 'newname');

This also means that:
$old_name = session_name('newname');
is functionally equivalent to
$old_name = ini_set('session.name', 'newname');

The current value of session.name is not attached to a session until session_start() is called.
Once session_start() has used session.name to look up the session_id() in the cookie data, the name becomes irrelevant, as all further operations on the session data are keyed by the session_id(). Note that changing session.name while a session is currently active will not update the name in any session cookie. The new name does not take effect until the next call to session_start(), and this requires that the current session, which was created with the previous value for session.name, be closed.

tony at marston-home dot demon dot co dot uk ¶ 7 years ago
The description has recently been modified to contain the statement "When new session name is supplied, session_name() modifies the HTTP cookie". This is not correct, as session_name() has never modified any cookie data. A change in session.name does not become effective until session_start() is called, and it is session_start() that creates the cookie if it does not already exist. See the following bug report for details: https://bugs.php.net/bug.php?id=76413

descartavel1+php at gmail dot com ¶ 2 years ago
Always try to set the prefix for your session name to either `__Host-` or `__Secure-` to benefit from browsers' improved security.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes

Also, if you have auto_session enabled, you must set this name in session.name in your config (php.ini, htaccess, etc.) | 2026-01-13T09:30:34
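The naming constraints scattered across the session_name() page and its user notes (alphanumeric only, at least one letter, no periods) can be collected into a small validator. This is an illustrative sketch in Python, not part of PHP or its manual; the rules are simply restated from the text above.

```python
# Sketch: session-name rules gathered from the manual text and user notes.
# - should contain only alphanumeric characters (a period breaks cookies,
#   e.g. "example.com" gets rewritten to "example_com")
# - must not consist only of digits; at least one letter is required,
#   otherwise PHP generates a new session id on every request

def is_valid_session_name(name: str) -> bool:
    # Reject empty names and anything with non-alphanumeric characters.
    if not name or not name.isalnum():
        return False
    # Require at least one letter so the name is not digits-only.
    return any(c.isalpha() for c in name)

print(is_valid_session_name("WebsiteID"))    # True
print(is_valid_session_name("12345"))        # False: digits only
print(is_valid_session_name("example.com"))  # False: contains a period
```

A name that fails these checks may appear to work in one request but silently regenerate ids or break cookies in the next, which is why the manual's warning is worth enforcing up front.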
https://llvmweekly.org/issue/534 | LLVM Weekly - #534, March 25th 2024

Welcome to the five hundred and thirty-fourth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org.

News and articles from around the web and events

LLVM 18.1.2 was released.

The /r/cpp subreddit C++ committee trip report for the March meeting in Tokyo is now up.

According to the LLVM calendar, in the coming week there will be the following (note: the US is still ahead of most countries in entering daylight savings time, so meeting times may differ from usual):
Office hours with the following hosts: Kristof Beyls, Johannes Doerfert, Amara Emerson.
Online sync-ups on the following topics: SPIR-V, new contributors, OpenMP, Flang, MLIR, RISC-V, embedded toolchains.
For more details see the LLVM calendar and the getting involved documentation on online sync-ups and office hours.

On the forums

Stephen Tozer shared a public service announcement on how instruction constructors are changing to iterator-only insertion.
Christudasan Devadasan is seeking feedback on the addition of synthetic register classes and allocation masks, intended to solve some recurring problems with allocating vector registers in the AMDGPU backend.
M Zeeshan Siddiqui posted an MLIR RFC on an optimisation in the SuperVectorizer regarding the handling of misaligned data.
Tom Eccles started an RFC discussion on adding an interface for top level container operations in Flang FIR.
"byrnesj1" posted an RFC on tracking values through integral address space casts for improved alignment reasoning.
LLVM commits

A detailed InstCombine contributor guide is now available. 6898147.
Temporary DbgRecord functions were added to the DIBuilder C API to help downstream projects during the transition period. f0dbcfe.
3-way comparison intrinsics were introduced. 276847a.
As a stepping stone towards the migration to ptradd, the representation of getelementptr inrange was changed. 0f46e31.
A PreferSmallerInstructions option was added to control the AsmMatcherEmitter and used for Arm as part of a refactoring. 6854f6f, 295cdd5.
A scheduling model was added for the SiFive-P670. c48d818.
Minbitwidth analysis in the superword level parallelism vectoriser was improved. 31eaf86.
DPValue was renamed to DbgVariableRecord, DPLabel to DbgLabelRecord, and DPMarker to DbgMarker. ffd08c7, bdc77d1, 75dfa58.
The Hexagon backend gained support for emitting Hexagon ELF attributes. 31f4b32.
REG_SEQUENCE and EXTRACT_SUBREG are now used to move between individual GPRs and GPRPair, leading to much better codegen for Zdinx (double-precision floating point in GPRs) on RV32. 576d81b.
llvm-objdump gained --skip-symbol[s] options. 4946cc3.

Clang commits

The default linker for wasm32-wasip2 is now wasm-component-ld. d66121d.
clang-cl now supports runtime feature detection of intrinsics. afec08e.
A soft-float ABI was added for AArch64. ef395a4.
Lambda expressions are now accepted in C++03 mode as an extension. 2699072.
RISC-V profile names can now be given for -march. b44771f.
bugprone-suspicious-stringview-data-usage was added to clang-tidy. 28c1279.

Other project commits

BOLT gained support for the Linux kernel static keys jump table. 6b1cf00.
LLVM's libc gained implementations of strfromd and strfroml. 83e9697.
ObjectiveC category merging was added to the lld-macho linker. cd34860.
Python bindings were enabled for the MLIR index dialect. eb861ac.

Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
https://penneo.com/da/trust-center/ | Trust Center - Penneo

Penneo Trust Center

At Penneo, we prioritize the security and privacy of our customers.

EU-approved Qualified Trust Service Provider (QTSP)
Certified under ISO/IEC 27001:2022 and ISO/IEC 27701:2019
EU-based data hosting & GDPR compliance

Security

We operate a certified Information Security Management System (ISMS) and Privacy Information Management System (PIMS) compliant with ISO/IEC 27001:2022 (Information Security) and ISO/IEC 27701:2019 (Privacy Management), respectively. You can find our certificates here: https://penneo.com/iso-certificates/. This ensures we have best-practice security measures in place at both the technical and organisational level. For an overview of our security measures, read our Data Processing Addendum at https://penneo.com/terms/.
Data privacy All customer data, including documents, signatures, and personal information, is stored and processed exclusively within secure AWS data centers in the European Union (Frankfurt and Dublin). Read our Privacy Policy , Data Processing Addendum , or contact our DPO at compliance@penneo.com for more information. EU Qualified Trust Service Provider (eIDAS) Penneo is recognized on the European Union Trust List (EUTL) as a Qualified Trust Service Provider (QTSP), authorizing Penneo to provide legally binding trust services across the EU. View Penneo’s QTSP documentation and certificates at eutl.penneo.com . Platform availability Penneo is committed to a highly available and reliable platform. You can view our real-time and historical system status at any time. Check Live System Status . Additional regulatory compliance We continuously monitor the evolving regulatory landscape to ensure our platform meets the needs of customers in regulated industries. Governance & Sustainability: Penneo is committed to operating as a responsible business by minimizing our environmental impact and upholding strong social and governance principles. Read more about it here. DORA (Digital Operational Resilience Act): Penneo supports financial entities’ ICT risk management and reporting obligations under DORA. Please contact compliance@penneo.com for further information. EU Data Act: Our commitment to data portability and interoperability aligns with the principles of the EU Data Act. Read our Data Act Addendum for more information. Accessibility: We are committed to ensuring our platform is accessible to all end-users. Read our Accessibility Statement for more information. Talk to our experts Book a quick demo and we’ll walk you through the key features and answer your questions – no pressure, just clarity. BOOK A DEMO Get a free trial today Sign your first documents with Penneo Sign and see how easy digital compliance can be. No credit card needed. 
GET A FREE TRIAL

PENNEO A/S - Gærtorvet 1-5, DK-1799 København V - CVR: 35633766 | 2026-01-13T09:30:34
https://docs.aws.amazon.com/pt_br/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html | Set up IAM permissions and roles for Lambda@Edge - Amazon CloudFront

Topics: IAM permissions required to associate Lambda functions with CloudFront distributions; Function execution role for service principals; Service-linked roles for Lambda@Edge

To set up Lambda@Edge, you need the following IAM permissions and roles for AWS Lambda:

IAM permissions: these permissions allow you to create the Lambda function and associate it with your CloudFront distribution.

Lambda function execution role (IAM role): the Lambda service principals assume this role to execute the function.

Service-linked roles for Lambda@Edge: service-linked roles allow specific AWS services to replicate Lambda functions to AWS Regions, and enable CloudWatch to use CloudFront log files.

IAM permissions required to associate Lambda functions with CloudFront distributions

In addition to the IAM permissions required for Lambda, you need the following permissions to associate Lambda functions with CloudFront distributions:

lambda:GetFunction: grants permission to get configuration information for the Lambda function, and a presigned URL to download a .zip file that contains the function.

lambda:EnableReplication*: grants permission on the resource policy so that the Lambda replication service can get the function's code and configuration.

lambda:DisableReplication*: grants permission on the resource policy so that the Lambda replication service can delete the function.
Important: You must add the asterisk (*) to the end of the lambda:EnableReplication* and lambda:DisableReplication* actions. For the resource, specify the ARN of the function version that you want to execute when a CloudFront event occurs, as shown in the following example: arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2

iam:CreateServiceLinkedRole: grants permission to create a service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. After you have set up Lambda@Edge for the first time, the service-linked role is created for you automatically. You don't need to add this permission to other distributions that use Lambda@Edge.

cloudfront:UpdateDistribution or cloudfront:CreateDistribution: grants permission to update or create a distribution.

For more information, see the following topics: Identity and Access Management for Amazon CloudFront; Lambda resource access permissions in the AWS Lambda Developer Guide.

Function execution role for service principals

You must create an IAM role that the lambda.amazonaws.com and edgelambda.amazonaws.com service principals can assume when they execute your function.

Tip: When you create the function in the Lambda console, you can choose to create an execution role from an AWS policy template. This step automatically adds the permissions that Lambda@Edge needs to execute the function. See step 5 of the tutorial Create a simple Lambda@Edge function. For more information about creating an IAM role manually, see Creating roles and attaching policies (console) in the IAM User Guide.

Example: role trust policy. You can add this role on the Trust relationships tab in the IAM console. Do not add this policy on the Permissions tab.
JSON

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "lambda.amazonaws.com",
          "edgelambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

For more information about the permissions that you need to grant to the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide.

Notes

By default, whenever a CloudFront event triggers a Lambda function, data is written to CloudWatch Logs. If you want to use these logs, the execution role needs permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole to grant the permission to the execution role. For more information about CloudWatch Logs, see Edge function logs.

If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that action.

Service-linked roles for Lambda@Edge

Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all the permissions that the service needs to call other AWS services on your behalf. Lambda@Edge uses the following IAM service-linked roles:

AWSServiceRoleForLambdaReplicator – Lambda@Edge uses this role to allow itself to replicate functions to AWS Regions. When you add a Lambda@Edge trigger to CloudFront for the first time, a role named AWSServiceRoleForLambdaReplicator is created automatically to allow Lambda@Edge to replicate functions to AWS Regions. This role is required to use Lambda@Edge functions.
The ARN of the AWSServiceRoleForLambdaReplicator role looks like this example: arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator

AWSServiceRoleForCloudFrontLogger – CloudFront uses this role to push log files to CloudWatch. You can use the log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association, to allow CloudFront to push Lambda@Edge error log files to CloudWatch.

The ARN of the AWSServiceRoleForCloudFrontLogger role looks like this: arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger

A service-linked role makes setting up and using Lambda@Edge easier because you don't have to manually add the necessary permissions. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume the roles. The defined permissions include the trust policy and the permissions policy. The permissions policy cannot be attached to any other IAM entity.

You must remove any associated CloudFront or Lambda@Edge resources before you can delete a service-linked role. This helps protect your Lambda@Edge resources, so that you don't remove a service-linked role that is still required to access active resources.

For more information about service-linked roles, see Service-linked roles for CloudFront.

Service-linked role permissions for Lambda@Edge

Lambda@Edge uses two service-linked roles: AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each of these roles.
Contents: Service-linked role permissions for the Lambda replicator; Service-linked role permissions for the CloudFront logger

Service-linked role permissions for the Lambda replicator

This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role. The role's permissions policy allows Lambda@Edge to complete the following actions on the specified resources:

lambda:CreateFunction on arn:aws:lambda:*:*:function:*
lambda:DeleteFunction on arn:aws:lambda:*:*:function:*
lambda:DisableReplication on arn:aws:lambda:*:*:function:*
iam:PassRole on all AWS resources
cloudfront:ListDistributionsByLambdaFunction on all AWS resources

Service-linked role permissions for the CloudFront logger

This service-linked role allows CloudFront to push log files to CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role. The role's permissions policy allows Lambda@Edge to complete the following actions on the specified arn:aws:logs:*:*:log-group:/aws/cloudfront/* resource:

logs:CreateLogGroup
logs:CreateLogStream
logs:PutLogEvents

You must configure permissions to allow an IAM entity (such as a user, group, or role) to delete a Lambda@Edge service-linked role. For more information, see Service-linked role permissions in the IAM User Guide.

Creating service-linked roles for Lambda@Edge

Normally, you don't need to manually create the service-linked roles for Lambda@Edge.
The service creates the roles automatically in the following situations:

When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role (if it doesn't already exist). This role allows Lambda to replicate Lambda@Edge functions to AWS Regions. If you delete the service-linked role, the role is created again when you add a new trigger for Lambda@Edge in a distribution.

When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role (if it doesn't already exist). This role allows CloudFront to push log files to CloudWatch. If you delete the service-linked role, it is created again when you update or create a CloudFront distribution that has a Lambda@Edge association.

To create these service-linked roles manually, run the following AWS Command Line Interface (AWS CLI) commands:

To create the AWSServiceRoleForLambdaReplicator role, run the following command:

aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com

To create the AWSServiceRoleForCloudFrontLogger role, run the following command:

aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com

Editing Lambda@Edge service-linked roles

Lambda@Edge doesn't allow you to edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After the service creates a service-linked role, you cannot change its name, because various entities might reference the role. However, you can use IAM to edit the role's description. For more information, see Editing a service-linked role in the IAM User Guide.
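Separately from the service-linked roles above, the user-level permissions listed at the top of this page for associating a function with a distribution can be collected into a single identity-based policy. The following is only an illustrative sketch, not a policy published by AWS: the account ID, function name, and version number are placeholders echoing the example ARN earlier on this page, and you should scope the Resource entries to your own ARNs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AssociateEdgeFunction",
      "Effect": "Allow",
      "Action": [
        "lambda:GetFunction",
        "lambda:EnableReplication*",
        "lambda:DisableReplication*"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2"
    },
    {
      "Sid": "CreateRoleAndUpdateDistribution",
      "Effect": "Allow",
      "Action": [
        "iam:CreateServiceLinkedRole",
        "cloudfront:UpdateDistribution",
        "cloudfront:CreateDistribution"
      ],
      "Resource": "*"
    }
  ]
}
```

Note the trailing asterisks on the replication actions, which the Important note above requires. The second statement uses a wildcard resource for brevity; the CloudFront actions can typically be scoped to a specific distribution ARN instead.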
Supported AWS Regions for Lambda@Edge service-linked roles

CloudFront supports Lambda@Edge service-linked roles in the following AWS Regions:

US East (N. Virginia) – us-east-1
US East (Ohio) – us-east-2
US West (N. California) – us-west-1
US West (Oregon) – us-west-2
Asia Pacific (Mumbai) – ap-south-1
Asia Pacific (Seoul) – ap-northeast-2
Asia Pacific (Singapore) – ap-southeast-1
Asia Pacific (Sydney) – ap-southeast-2
Asia Pacific (Tokyo) – ap-northeast-1
Europe (Frankfurt) – eu-central-1
Europe (Ireland) – eu-west-1
Europe (London) – eu-west-2
South America (São Paulo) – sa-east-1 | 2026-01-13T09:30:34
https://aws.amazon.com/contact-us/?pg=ln&sec=hs | AWS Support and Customer Service Contact Info | Amazon Web Services

Contact AWS

General support for sales, compliance, and subscribers.

Want to speak with an AWS sales specialist? Get in touch: chat online or talk by phone, Monday through Friday, or submit a sales support request form.
Compliance support: request support related to AWS compliance.
Subscriber support services: technical support for service-related technical issues (unavailable under the Basic Support Plan; sign in and submit a request); account or billing support for account and billing related inquiries (sign in to request); wrongful charges support if you received a bill for AWS but don't have an AWS account.
Support plans: learn about AWS support plan options and Premium Support.

AWS sign-in resources

Help signing in to the console: need assistance signing in to the AWS Management Console? View the documentation.
Troubleshoot your sign-in issue: tried to sign in, but the credentials didn't work? Or don't have the credentials to access the AWS root user account? View solutions.
Help with multi-factor authentication (MFA) issues: lost or unusable MFA device. View the solution.
Still unable to sign in to your AWS account? Fill out the sign-in help form.

Additional resources

Self-service: re:Post provides access to curated knowledge and a vibrant community that helps you become even more successful on AWS.
Service limit increases: need to increase a service limit?
Fill out a quick request form (sign in to request).
Report abuse: report abusive activity from Amazon Web Services (report suspected abuse).
Amazon.com support: request Kindle or Amazon.com support on amazon.com. | 2026-01-13T09:30:34
https://docs.aws.amazon.com/ko_kr/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-testing-debugging.html | Test and debug Lambda@Edge functions - Amazon CloudFront

Topics: Test Lambda@Edge functions; Identify Lambda@Edge function errors in CloudFront; Troubleshoot invalid Lambda@Edge function responses (validation errors); Troubleshoot Lambda@Edge function execution errors; Determine the Lambda@Edge Region; Determine whether your account pushes logs to CloudWatch

You should test your Lambda@Edge function code on its own to make sure it completes the intended task, and perform integration testing to make sure the function works correctly with CloudFront. During integration testing, or after the function has been deployed, you might need to debug CloudFront errors such as HTTP 5xx errors. Errors can be an invalid response returned by the Lambda function, execution errors that occur when the function runs, or errors caused by execution throttling by the Lambda service. The sections in this topic describe how to determine which type of failure occurred, and the steps you can take to fix each problem.

Note: When you review CloudWatch log files or metrics to troubleshoot errors, be aware that they are displayed or stored in the AWS Region closest to the location where the function executed. For example, if you have a website or web application with users in the United Kingdom, and a Lambda function associated with your distribution, you must change the Region to view the CloudWatch metrics or log files for the London AWS Region. For more information, see Determine the Lambda@Edge Region.

Test Lambda@Edge functions

There are two steps to testing a Lambda function: standalone testing and integration testing.

Standalone functional testing: Before you add your Lambda function to CloudFront, first test it by using the testing feature in the Lambda console or by another method. For more information about testing in the Lambda console, see Invoke a Lambda function using the console in the AWS Lambda Developer Guide.

Integration testing to see the function operating in CloudFront: You must complete integration testing when the function is associated with a distribution and runs based on a CloudFront event. Make sure the function is triggered for the right event and returns a response that is valid and correct for CloudFront. For example, check that the event structure is correct, that only valid headers are included, and so on. As you iterate on integration testing of your function in the Lambda console, refer to the steps in the Lambda@Edge tutorial as you modify your code or change the CloudFront trigger that invokes the function. For example, make sure you're working in a numbered version of your function, as described in the tutorial step Step 4: Add a CloudFront trigger to run the function.

When you make changes and deploy the function, be aware that it can take several minutes for the updated function and CloudFront triggers to replicate across all Regions. This typically takes a few minutes, but can take up to 15 minutes.
You can verify that replication has finished by going to the CloudFront console and viewing your distribution. To check that distribution replication has finished, open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home, choose your distribution's name, and check that the distribution status has changed from Deploying back to Deployed, which means your function has been replicated. Then follow the steps in the next section to verify that the function works. Testing in the console verifies only your function's logic; it doesn't apply any of the service quotas (formerly known as limits) that are specific to Lambda@Edge.

Identify Lambda@Edge function errors in CloudFront

After you've verified that your function logic works correctly, you might still see HTTP 5xx errors when your function executes in CloudFront. HTTP 5xx errors can be returned for a variety of reasons, which can include Lambda function errors or other issues in CloudFront. If you're using Lambda@Edge functions, you can use graphs in the CloudFront console to help track down what's causing the error, and then work to fix it. For example, you can determine whether an HTTP 5xx error was caused by CloudFront or by your Lambda function, and then, for specific functions, review the related log files to investigate the issue. To troubleshoot general HTTP errors in CloudFront, see the troubleshooting steps in the topic Troubleshooting error response status codes in CloudFront.

What causes Lambda@Edge function errors in CloudFront

There are several reasons why a Lambda function might cause an HTTP 5xx error, and the troubleshooting steps you should take depend on the type of error. Errors can be categorized as follows:

A Lambda function execution error. An execution error occurs when CloudFront doesn't get a response from Lambda because there are unhandled exceptions in the function or there's an error in the code. For example, if the code contains callback(error).

An invalid Lambda function response is returned to CloudFront. After the function runs, CloudFront receives a response from Lambda. An error is returned if the object structure of the response doesn't conform to the Lambda@Edge event structure, or if the response contains invalid headers or other invalid fields.

Execution in CloudFront is throttled due to Lambda service quotas (formerly known as limits). The Lambda service throttles executions in each Region and returns an error if you exceed the quota. For more information, see Quotas on Lambda@Edge.

How to determine the type of failure

To decide where to focus while debugging, and to resolve the error returned by CloudFront, it helps to identify why CloudFront returned the HTTP error. To get started, use the graphs provided in the Monitoring section of the CloudFront console in the AWS Management Console. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitoring CloudFront metrics with Amazon CloudWatch.

The following graphs are especially useful when you want to track whether errors are being returned by the origin or by a Lambda function, and, for Lambda function errors, to narrow down the type of issue.

Error rate graph: Among the graphs on the Overview tab for each of your distributions is the Error rate graph. This graph displays the error rate as a percentage of the total requests coming to your distribution. It shows the total error rate, total 4xx errors, total 5xx errors, and total 5xx errors from Lambda functions. Based on the error type and volume, you can take steps to investigate and troubleshoot the cause.
If you see Lambda errors, you can investigate further by determining which specific types of errors the function is returning. The Lambda@Edge errors tab includes graphs that categorize function errors by type, helping you pinpoint the issue for a specific function. If you see CloudFront errors, you can troubleshoot and work to fix origin errors or change your CloudFront configuration. For more information, see Troubleshooting error response status codes in CloudFront.

Execution errors and invalid function responses graphs

The Lambda@Edge errors tab includes graphs that categorize the Lambda@Edge errors for a specific distribution by type. For example, one graph shows all execution errors by AWS Region. To make troubleshooting easier, you can look for specific issues by opening and reviewing the log files for specific functions, by Region. To view log files for a specific function by Region, on the Lambda@Edge errors tab, under Related Lambda@Edge functions, choose the function name and then View metrics. Then, in the top-right corner of the page with the function name, choose View function logs and choose a Region. For example, if you see issues in the error graph for the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console. In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events for your function. Also read the additional recommendations for resolving and fixing errors in the following sections of this chapter.

Throttles graph

The Lambda@Edge errors tab also includes a throttles graph. In some cases, the Lambda service throttles function invocations per Region when a Regional concurrency quota (formerly known as a limit) is reached. If you see a throttling error, your function has reached a quota that the Lambda service imposes on executions in a Region. For more information, including how to request an increase in these quotas, see Quotas for Lambda@Edge. For an example of how to use this information to troubleshoot HTTP errors, see Four Steps for Debugging your Content Delivery on AWS.

Troubleshooting invalid Lambda@Edge function responses (validation errors)

If you've determined that the problem is a Lambda validation error, your Lambda function is returning an invalid response to CloudFront. Follow the guidance in this section to review your function and make sure that the response conforms to CloudFront requirements.

CloudFront validates the response from a Lambda function in two ways:

The Lambda response must conform to the required object structure. Examples of a bad object structure include unparsable JSON, missing required fields, and an invalid object in the response. For more information, see Lambda@Edge event structure.

The response must include only valid objects. An error occurs if the response includes a valid object whose value isn't supported. Examples include adding or updating disallowed or read-only headers (see Restrictions on edge functions), exceeding the maximum body size (see the generated response size limit in the Lambda@Edge errors topic), and invalid characters or values (see Lambda@Edge event structure).

When Lambda returns an invalid response to CloudFront, error messages are written to log files that CloudFront pushes to CloudWatch in the Region where the Lambda function executed. Sending log files to CloudWatch when there's an invalid response is the default behavior.
However, if you associated a Lambda function with CloudFront before this feature was released, it might not be enabled for your function. For more information, see Confirming that your account pushes logs to CloudWatch later in this topic.

CloudFront pushes log files to the Region corresponding to where your function executed, in the log group associated with your distribution. Log groups have the format /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where you can find the CloudWatch log files, see Determining the Lambda@Edge Region later in this topic.

If the error is reproducible, you can create a new request that results in the error, then find the request ID in the failed CloudFront response (the X-Amz-Cf-Id header) to locate a single failure in the log files. The log file entry includes information that can help you identify why the error is being returned, and lists the corresponding Lambda request ID so that you can analyze the root cause in the context of a single request. If an error is intermittent, you can use CloudFront access logs to find the request ID of a failed request and then search the CloudWatch logs for the corresponding error messages. For more information, see How to determine the type of failure, earlier in this topic.

Troubleshooting Lambda@Edge function execution errors

If the problem is a Lambda execution error, it can help to add logging statements to your Lambda function so that messages tracking the function's execution in CloudFront are written to CloudWatch log files, and then to verify that it behaves as expected. You can search the CloudWatch log files for those statements to confirm that your function is working.

Note: Even if you haven't changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could return an execution error. For information about testing and migrating to a later version, see Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environment.

Determining the Lambda@Edge Region

To see the Regions where your Lambda@Edge function is receiving traffic, view the metrics for the function on the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region. On the same page, you can choose a Region and view its log files to investigate issues. You must review the CloudWatch Logs files in the correct AWS Region to see the log files created when CloudFront executed your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitoring CloudFront metrics with Amazon CloudWatch.

Confirming that your account pushes logs to CloudWatch

By default, CloudFront enables logging of invalid Lambda function responses and pushes the log files to CloudWatch by using one of the service-linked roles for Lambda@Edge. If you have Lambda@Edge functions that you added to CloudFront before the invalid Lambda function response logging feature was released, logging is enabled the next time you update your Lambda@Edge configuration — for example, by adding a CloudFront trigger.

You can verify that pushing log files to CloudWatch is enabled for your account by doing the following:

Check that logs appear in CloudWatch. Make sure you look for the log files in the Region where the Lambda@Edge function executed. For more information, see Determining the Lambda@Edge Region.
Verify that your account has the relevant service-linked role in IAM. Your account must have the IAM role AWSServiceRoleForCloudFrontLogger. For more information, see Service-linked roles for Lambda@Edge. | 2026-01-13T09:30:34 |
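The log-group naming and request-ID correlation described above can be sketched in Python. These helpers are illustrative only — they are not part of any AWS SDK, and the log-entry field name is an assumption:

```python
# Illustrative helpers (not part of the AWS SDK) modeling where CloudFront
# pushes Lambda@Edge validation-error logs and how a failed request is
# correlated with a log entry.

def lambda_edge_log_group(distribution_id: str) -> str:
    """Log groups use the format /aws/cloudfront/LambdaEdge/DistributionId."""
    return f"/aws/cloudfront/LambdaEdge/{distribution_id}"

def find_failed_request(log_entries, cf_request_id):
    """Locate a single failure by the request ID taken from the
    X-Amz-Cf-Id header of the failed CloudFront response.
    The 'requestId' field name here is illustrative."""
    for entry in log_entries:
        if entry.get("requestId") == cf_request_id:
            return entry
    return None

group = lambda_edge_log_group("E2EXAMPLE123")  # hypothetical distribution ID
```

In practice you would open this log group in the CloudWatch console for the Region where the function ran, as the guide describes, rather than scan entries by hand.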
https://penneo.com/da/why-penneo/ | Why choose Penneo?

Efficient digital signing and full compliance with EU legislation

Penneo puts your workflows into high gear. You handle every agreement quickly and securely, always protected against legal breaches. The solution saves you time and minimizes errors, and you can easily sign in accordance with all eIDAS requirements via MitID, itsme®, BankID, .beID, and passport.

Why others choose Penneo: 3,000+ companies — including the four largest audit firms — use Penneo. 60% of documents sent with Penneo are signed within 24 hours. 81% of all annual reports in Denmark are signed with Penneo.

Key benefits of Penneo

Save time and work more efficiently: Integrate Penneo with your existing systems and avoid manual data entry, errors, and switching between platforms. Automate even the most complex signing flows with rules that send documents to multiple signers in a defined order. Avoid delays — automatic reminders keep your signing processes on track. Use standardized email templates to communicate professionally and consistently with signers — no need to write new messages every time. Save time on follow-up — automatic status updates keep case managers informed about changes and signer activity.

Give your customers a better experience: Customers can sign multiple documents at once with a single digital signature — easy and efficient. Sign and fill in forms digitally with MitID, passport, or eID — no printing, scanning, or postage. The intuitive platform guides signers step by step and makes the process easy, even for those who aren't technically inclined. Customers automatically get access to their signed documents in a personal archive — no more asking for copies or storing them locally. Signers receive messages and use the platform in their preferred language — a more comfortable and accessible experience.

Ensure strong data security and legal compliance: Create qualified electronic signatures with itsme®, Norwegian BankID, passport, or .beID — legally valid throughout the EU and equivalent to a handwritten signature. Create advanced electronic signatures in accordance with eIDAS with MitID or Swedish BankID. Penneo complies with GDPR and is certified to ISO 27001 and 27701, so your data is in safe hands. Protect sensitive documents with access control via eID or SMS — only the right people get access.
Every action is recorded in a secure, tamper-proof log, giving you full transparency and documentation.

Why Penneo gives you more than a standard solution (Penneo vs. standard e-signature solutions)

Complex signing flows: Penneo handles even the most complex flows — with support for multiple signers, role-based ordering, and approval rounds, leaving nothing to chance. Standard solutions are often limited to simple, linear flows and lack the flexibility to support multiple steps, roles, and approvals.

Qualified electronic signatures (QES): Penneo supports qualified electronic signatures with passport, itsme®, Norwegian BankID, or .beID — with the same legal validity as a handwritten signature and recognized throughout the EU. Standard solutions rarely offer QES and are typically limited to lower-assurance signature types that don't meet the requirements for full EU recognition.

Advanced electronic signatures (AES): Penneo supports advanced electronic signatures with trusted identity solutions such as MitID, MitID Erhverv, and Swedish BankID, giving you both strong security and full traceability. Standard solutions may in some cases support AES but often lack integration with national eIDs or passports.

Supported languages: Penneo is available in 8 languages, ensuring a user-friendly experience across markets — for you and your customers. Standard solutions often support only a few languages, which can create barriers for international users and limit usability.

Open API: Penneo offers a flexible, well-documented open API that makes it easy to integrate with your existing systems and automate workflows. Standard solutions often have limited API access or require custom adaptations, making integration more cumbersome and less scalable.
Automatic reminders: Penneo automatically sends reminders to signers, so you avoid delays and ensure that signing flows are completed on time. Standard solutions often require manual follow-up, which adds administrative work and increases the risk of missed deadlines.

Ease of use: Penneo is designed with an intuitive, guided signing experience that makes it easy for everyone — regardless of technical experience — to sign correctly and quickly. Standard solutions can be confusing or technically heavy, especially for first-time users or people without a technical background.

Data protection and information security: Penneo complies with GDPR and is certified to ISO 27001 and ISO 27701, ensuring the highest standard of data security and protection of personal data. Security levels vary among standard solutions, and many lack either the relevant certifications or full GDPR compliance.

Scales with your business: Penneo is built to grow with your company — from small teams to large organizations, with flexible workflows and efficient user management. Standard solutions often have limited scalability and can struggle with complex processes or large volumes of documents.

Customer support: Penneo offers fast, personal support, with dedicated assistance tailored to your company's needs. Standard solutions often have generic support that can be slow, impersonal, or limited to basic documentation.

Goldwasser Exchange cut account opening from 10 days to 24 hours with Penneo: "Previously, it typically took between 5 and 10 days to complete an account opening. Today — thanks to Penneo, electronic signatures, and the automation we have implemented — we can open accounts in under 24 hours."
— Jonathan Goldwasser, CEO, Goldwasser Exchange. Read the customer story.

Built for even the most complex signing processes: Whether you work in audit, accounting, real estate, finance, or HR, Penneo lets your team sign digitally — securely and efficiently. The solution is built to automate even the most complex signing processes, so you can focus on what really matters.

Audit & accounting: Send engagement letters, auditor's reports, and annual reports for signature in a few clicks. Real estate: Close property transactions faster by removing the need for physical meetings and manual paperwork. Law firms: Let your clients sign remotely with secure digital signatures that meet eIDAS requirements. Finance and banking: Cut down on paperwork and manual work — and give your customers a smooth, professional experience. HR & recruitment: Onboard new employees quickly — send employment contracts for digital signature in minutes.

Quick setup. Easy switch. No disruptions. Switching to Penneo doesn't mean starting over. Thanks to our ready-made integrations and open API, you can easily connect your existing systems and transfer data across platforms — without disrupting daily operations.

Thousands of companies trust Penneo to simplify their signing processes. Time to join the community?
PENNEO A/S - Gærtorvet 1-5, DK-1799 København V - CVR: 35633766 | 2026-01-13T09:30:34 |
https://support.microsoft.com/it-it/windows/gestire-i-cookie-in-microsoft-edge-visualizzare-consentire-bloccare-eliminare-e-usare-168dab11-0753-043d-7c16-ede5947fc64d | Manage cookies in Microsoft Edge: view, allow, block, delete, and use - Microsoft Support
Applies to: Windows 10, Windows 11, Microsoft Edge

Cookies are small pieces of data that the websites you visit store on your device. They serve various purposes, such as remembering your sign-in credentials and site preferences, and tracking user behavior. However, you might want to delete cookies for privacy reasons or to fix browsing problems. This article provides instructions on how to: view all cookies; allow all cookies; allow cookies from a specific website; block third-party cookies; block all cookies; block cookies from a specific site; delete all cookies; delete cookies from a specific site; delete cookies every time you close the browser; and use cookies to preload pages for faster browsing.

View all cookies

Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services.
Select Cookies, then click View all cookies and site data to see all stored cookies and related site information.

Allow all cookies

By allowing cookies, websites can save and retrieve data in your browser, improving your browsing experience by remembering your preferences and sign-in information. Open the Edge browser, select Settings and more in the upper-right corner of the browser window, then select Settings > Privacy, search, and services. Select Cookies and turn on the Allow sites to save and read cookie data (recommended) toggle to allow all cookies.

Allow cookies from a specific site

Open the Edge browser, select Settings and more, then select Settings > Privacy, search, and services. Select Cookies and go to Allowed to save cookies. Select Add site to allow cookies on a per-site basis by entering the site's URL.

Block third-party cookies

If you don't want third-party sites to store cookies on your PC, you can block them. However, this might prevent some pages from displaying correctly, or a site might tell you that you need to allow cookies to view it. Open the Edge browser, select Settings and more, then select Settings > Privacy, search, and services. Select Cookies and turn on the Block third-party cookies toggle.

Block all cookies

If you don't want sites to store cookies on your PC, you can block all cookies.
However, this might prevent some pages from displaying correctly, or a site might tell you that you need to allow cookies to view it. Open the Edge browser, select Settings and more, then select Settings > Privacy, search, and services. Select Cookies and turn off Allow sites to save and read cookie data (recommended) to block all cookies.

Block cookies from a specific site

Microsoft Edge lets you block cookies from a specific site, though this might prevent some pages from displaying correctly, or a site might tell you that you need to allow cookies to view it. To block cookies from a specific site: open the Edge browser, select Settings and more, then select Settings > Privacy, search, and services. Select Cookies and go to Not allowed to save and read cookies. Select Add site to block cookies on a per-site basis by entering the site's URL.

Delete all cookies

Open the Edge browser, select Settings and more, then select Settings > Privacy, search, and services. Select Clear browsing data, then choose what to clear next to Clear browsing data now. Under Time range, choose a time range. Select Cookies and other site data, then select Clear now.

Note: Alternatively, you can delete cookies by pressing Ctrl + Shift + Delete and continuing with steps 4 and 5. All cookies and other site data are deleted for the selected time range. This signs you out of most sites.
Delete cookies from a specific site

Open the Edge browser, select Settings and more > Settings > Privacy, search, and services. Select Cookies, then click View all cookies and site data and search for the site whose cookies you want to delete. Select the down arrow to the right of that site and select Delete. The cookies for the selected site are deleted. Repeat this step for any other site whose cookies you want to delete.

Delete cookies every time you close the browser

Open the Edge browser, select Settings and more > Settings > Privacy, search, and services. Select Clear browsing data, then Choose what to clear every time you close the browser. Turn on the Cookies and other site data toggle. Once this is enabled, all cookies and other site data are deleted every time you close the Edge browser. This signs you out of most sites.

Use cookies to preload pages for faster browsing

Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the Preload pages for faster browsing and searching toggle.
© Microsoft 2026 | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/535 | LLVM Weekly - #535, April 1st 2024 LLVM Weekly - #535, April 1st 2024 Welcome to the five hundred and thirty-fifth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events A fork of Clang that implements the P2996 C++ reflection specification is available , by Dan Katz and others at Bloomberg, amongst other contributors. See the previous link for information on the implementation choices and the /r/cpp discussion . Bartosz Taudul blogged about pretty printing Arm Neon registers in LLDB . According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Anastasia Stulova, Quentin Colombet, Johannes Doerfert. Online sync-ups on the following topics: Flang, MLIR C/C++ frontend, MemorySSA, new contributors, LLVM/offload, classic flang, loop optimisations, OpenMP in Flang, MLIR open meeting, PowerPC, HLSL. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Tom Stellard indicated that he’s now hoping to collate release notes for LLVM point releases , starting from 18.1.3. If April fools jokes are your thing, you’ll enjoy this MLIR syntax proposal . There were more discussions about criteria for LLVM commit access in light of the recent xz backdoor. Noah Goldstein proposed supporting the nneg flag for uitofp . Fangrui Song shared details of work on supporting --compress-sections in llvm-objcopy . Matt Arsenault is seeking feedback on supporting atomicrmw with floating-point vector operations . 
People are trying to get feedback on EuroLLVM round tables, e.g. for MLIR or on debuginfo (and perhaps others I missed). LLVM commits An alternative translation of -wasm-enable-sjlj was implemented. 6420f37 . TableGen’s implementation was restructured into a “Basic” and “Common” library. fa3d789 . The SPIRVUsage documentation was expanded. f5e1cd5 . The RISCVMakeCompressible pass learned to handle byte/half load/store for the Zcb extension. 22bfc58 . The va_list related intrinsics were made generic, meaning arguments don’t have to be address space 0 pointers. ab7dba2 . The language reference was expanded to better clarify some metadata semantics. eee8c61 . The nuw and nsw nowrap flags can now be used for trunc . 7d3924c . Clang commits The optin.performance.Padding checker from the Clang static analyzer was documented. b8cc838 . Flexible arrays in unions or alone in structs are now allowed. 14ba782 . The -Wformat-signedness warning is now supported. ea92b1f . Other project commits Support for REDUCE() was implemented in Flang’s runtime. 3ada883 . The C23 fmaximum and fminimum functions were implemented in LLVM’s libc. 3e3f0c3 . Guidelines for adding [[nodiscard]] to the libcxx implementation were documented. 2684a09 . BufferOriginAnalysis was added to MLIR. dbfc38e . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
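For readers unfamiliar with the new nowrap flags on trunc mentioned above, here is a rough Python model of the LangRef semantics (this is an illustration, not LLVM code): with nuw, any non-zero truncated bit makes the result poison; with nsw, any truncated bit that differs from the sign bit of the result makes it poison.

```python
# Rough Python model of LLVM's `trunc nuw/nsw` semantics (not LLVM code).
# Truncating iN -> iM keeps the low M bits; the nowrap flags assert that
# no meaningful bits are discarded, otherwise the result is poison.

POISON = object()  # stand-in for LLVM's poison value

def trunc(value, src_bits, dst_bits, nuw=False, nsw=False):
    mask = (1 << dst_bits) - 1
    result = value & mask
    if nuw and (value & ~mask) != 0:
        return POISON  # a truncated bit was non-zero
    # For nsw, every truncated bit must equal the sign bit of the result.
    sign = (result >> (dst_bits - 1)) & 1
    truncated = value >> dst_bits
    all_ones = (1 << (src_bits - dst_bits)) - 1
    if nsw and truncated != (0 if sign == 0 else all_ones):
        return POISON
    return result
```

For example, truncating the i16 bit pattern 0x0100 to i8 under nuw is poison (a set bit is dropped), while 0x00FF truncates cleanly to 0xFF.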
https://aws.amazon.com/it/waf/ | Web Application Firewall - Web API Protection - AWS WAF - AWS

Get 10 million common bot detection requests per month with the AWS Free Tier. AWS WAF: Protect web applications from common exploits.

Benefits of AWS WAF

Save time with managed rules: Save time with managed rules so you can spend more of it building applications.

Monitor, block, or rate-limit bots: More easily monitor, block, or rate-limit common and pervasive bots.

Reduce security configuration steps: Speed up complex security configuration with a consolidated interface that reduces the complexity and configuration steps of deploying security by up to 80%.

Centralized, actionable visibility: A single, comprehensive interface combines core security functions with specialized partner protections to improve security visibility and controls. This unified approach turns security data into actionable insights, removing operational friction and speeding up risk response.

Strengthen your security posture: Preconfigured protection packs draw on AWS security expertise to provide instant protection templates for specific industries and workload types such as APIs, PHP applications, and web services. These templates are continuously tuned to keep security up to date without requiring deep implementation expertise. Get ongoing security recommendations to strengthen your overall security posture.

Why use AWS WAF?
Grazie ad AWS WAF, puoi creare regole di sicurezza che controllano il traffico dei bot e bloccano modelli di attacco comuni come iniezione SQL o cross-site scripting (XSS). Riproduci Casi d'uso Filtra il traffico Web Crea regole per filtrare le richieste Web in base a condizioni quali indirizzi IP, intestazioni e strutture HTTP o URI personalizzati. Scopri di più sulla creazione di regole Previeni le frodi di acquisizione account Monitora la pagina di accesso della tua applicazione per l'accesso non autorizzato agli account utente con credenziali compromesse. Scopri di più sulla prevenzione delle frodi Protezione automatica da attacchi DDoS di livello 7 Progettato per monitorare continuamente e mitigare automaticamente gli eventi Distributed Denial of Service (DDoS) a livello di applicazione (livello 7) in pochi secondi. Implementazione rapida della sicurezza Avvia nuove applicazioni in tutta sicurezza utilizzando la configurazione di onboarding guidata semplificata con un'interfaccia a pagina singola per attivare impostazioni di sicurezza preconfigurate su misura per le tue esigenze. Rafforza l'assetto di sicurezza Grazie a pacchetti di regole curati da esperti, visibilità consolidata e consigli continui, ottieni una protezione immediata per ottimizzare il tuo livello di sicurezza. Nozioni di base su AWS WAF Nozioni di base su AWS WAF Esplora AWS WAF Contatta un esperto Contattaci Crea un account AWS Scopri Cos'è AWS? Cos'è il cloud computing? Cos'è l'IA agentica? 
Hub dei concetti di cloud computing Sicurezza nel cloud AWS Novità Blog Comunicati stampa Risorse Nozioni di base Formazione AWS Trust Center Biblioteca di soluzioni AWS Centro di architettura Domande frequenti tecniche e relative ai prodotti Report degli analisti Partner AWS Sviluppatori Centro builder SDK e strumenti .NET su AWS Python su AWS Java su AWS PHP su AWS JavaScript su AWS Assistenza Contattaci Inoltra un ticket di supporto AWS re:Post Knowledge Center Panoramica del Supporto AWS Accedi ai consigli degli esperti Accessibilità AWS Note legali English Torna all'inizio Amazon è un datore di lavoro per le pari opportunità: Minoranza/Donne/Disabilità/Veterano/Identità di genere/Orientamento sessuale/Età. x facebook linkedin instagram twitch youtube podcasts email Privacy Termini di utilizzo del sito Preferenze cookie © 2026, Amazon Web Services, Inc. o società affiliate. Tutti i diritti riservati. | 2026-01-13T09:30:34 |
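The "filter web traffic" and rate-limiting use cases above come down to rule statements expressed in the WAFv2 JSON model. A sketch of what a rate-based rule scoped to a login path might look like (rule name, limit, and path are illustrative, not taken from this page):

```json
{
  "Name": "rate-limit-login",
  "Priority": 0,
  "Statement": {
    "RateBasedStatement": {
      "Limit": 1000,
      "AggregateKeyType": "IP",
      "ScopeDownStatement": {
        "ByteMatchStatement": {
          "SearchString": "/login",
          "FieldToMatch": { "UriPath": {} },
          "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ],
          "PositionalConstraint": "STARTS_WITH"
        }
      }
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "rate-limit-login"
  }
}
```

The rule blocks any single IP that exceeds 1000 requests to paths starting with /login within the rate-tracking window; it would be attached to a WebACL's Rules array.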
https://logging.apache.org/log4j/2.x/manual/json-template-layout.html | JSON Template Layout :: Apache Log4j, a subproject of Apache Logging Services JSON Template Layout JsonTemplateLayout is a customizable, efficient , and garbage-free JSON generating layout. It encodes LogEvent s according to the structure described by the JSON template provided.
In a nutshell, it shines with its Customizable JSON structure (see eventTemplate[Uri] and stackTraceElementTemplate[Uri] layout configuration parameters ) Customizable timestamp formatting (see timestamp event template resolver) Feature rich exception formatting (see exception and exceptionRootCause event template resolvers) Extensible plugin support Customizable object recycling strategy JSON Template Layout is intended for production deployments where the generated logs are expected to be delivered to an external log ingestion system such as Elasticsearch or Google Cloud Logging. While running tests or developing locally, you can use Pattern Layout for human-readable log output. Usage Adding log4j-layout-template-json artifact to your list of dependencies is enough to enable access to JSON Template Layout in your Log4j configuration: Maven Gradle We assume you use log4j-bom for dependency management. <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-layout-template-json</artifactId> <scope>runtime</scope> </dependency> We assume you use log4j-bom for dependency management. runtimeOnly 'org.apache.logging.log4j:log4j-layout-template-json' JSON Template Layout is primarily configured through an event template describing the structure log events should be JSON-encoded in. Event templates themselves are also JSON documents, where objects containing $resolver members, such as, { "$resolver": "message", (1) "stringified": true (2) } 1 Indicating that this object should be replaced with the output from the message event template resolver 2 Passing a configuration to the message event template resolver are interpreted by the JSON Template Layout compiler, and replaced with the referenced event or stack trace template resolver rendering that particular item. 
For instance, given the following event template stored in MyLayout.json in your classpath: { "instant": { (1) "$resolver": "timestamp", "pattern": { "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", "timeZone": "UTC" } }, "someConstant": 1, (2) "message": { (3) "$resolver": "message", "stringified": true } } 1 Using the timestamp event template resolver to populate the instant field 2 Passing a constant that will be rendered as is 3 Using the message event template resolver to populate the message field in combination with the below layout configuration: XML JSON YAML Properties Snippet from an example log4j2.xml <JsonTemplateLayout eventTemplateUri="classpath:MyLayout.json"/> Snippet from an example log4j2.json "JsonTemplateLayout": { "eventTemplateUri": "classpath:MyLayout.json" } Snippet from an example log4j2.yaml JsonTemplateLayout: eventTemplateUri: "classpath:MyLayout.json" Snippet from an example log4j2.properties appender.0.layout.type = JsonTemplateLayout appender.0.layout.eventTemplateUri = classpath:MyLayout.json JSON Template Layout generates JSON as follows: {"instant":"2017-05-25T19:56:23.370Z","someConstant":1,"message":"Hello, error!"} (1) 1 JSON pretty-printing is not supported for performance reasons. Good news is JSON Template Layout is perfectly production-ready without any configuration! The event template defaults to EcsLayout.json , bundled in the classpath, modelling the Elastic Common Schema (ECS) specification . JSON Template Layout bundles several more predefined event templates modeling popular JSON-based log formats. Configuration This section explains how to configure JSON Template Layout plugin element in a Log4j configuration file. Are you trying to implement your own event (or stack trace) template and looking for help on available resolvers? Please refer to Template configuration instead. 
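To place the layout snippets above in context, a minimal complete log4j2.xml wiring JSON Template Layout into a console appender might look like the following sketch (the appender name and root logger level are illustrative choices, not mandated by the layout):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- Write one JSON object per line to stdout -->
    <Console name="CONSOLE">
      <JsonTemplateLayout eventTemplateUri="classpath:MyLayout.json"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="CONSOLE"/>
    </Root>
  </Loggers>
</Configuration>
```

Dropping the eventTemplateUri attribute entirely falls back to the default EcsLayout.json template.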
📖 Plugin reference for JsonTemplateLayout

Plugin attributes

JSON Template Layout plugin configuration accepts the following attributes:

charset (type: Charset, default: UTF-8, property: log4j.layout.jsonTemplate.charset)
Charset used for encoding the produced JSON into bytes. While RFC 4627, the JSON specification, supports multiple Unicode encodings , we strongly advise you to stick to the layout default and use UTF-8, for several practical reasons .

locationInfoEnabled (type: boolean, default: false, property: log4j.layout.jsonTemplate.locationInfoEnabled)
Toggles access to the LogEvent source: file name, line number, etc. See also location information .

stackTraceEnabled (type: boolean, default: true, property: log4j.layout.jsonTemplate.stackTraceEnabled)
Toggles access to the stack traces.

eventTemplate (type: String, default: null, property: log4j.layout.jsonTemplate.eventTemplate)
Inline event template JSON for rendering LogEvent s. If present, this configuration overrides eventTemplateUri .

eventTemplateUri (type: String, default: classpath:EcsLayout.json, property: log4j.layout.jsonTemplate.eventTemplateUri)
URI pointing to the event template JSON for rendering LogEvent s. This configuration is overridden by eventTemplate , if present.

eventTemplateRootObjectKey (type: String, default: null, property: log4j.layout.jsonTemplate.eventTemplateRootObjectKey)
If present, the event template is put into a JSON object composed of a single member with the provided key.

stackTraceElementTemplate (type: String, default: null, property: log4j.layout.jsonTemplate.stackTraceElementTemplate)
Inline stack trace element template JSON for rendering StackTraceElement s. If present, this configuration overrides stackTraceElementTemplateUri .
stackTraceElementTemplateUri (type: String, default: classpath:StackTraceElementLayout.json, property: log4j.layout.jsonTemplate.stackTraceElementTemplateUri)
URI pointing to the stack trace element template JSON for rendering StackTraceElement s. This configuration is overridden by stackTraceElementTemplate , if present.

eventDelimiter (type: String, default: System.lineSeparator(), property: log4j.layout.jsonTemplate.eventDelimiter)
Delimiter used for separating rendered LogEvent s. If nullEventDelimiterEnabled is true , this value will be suffixed with the \0 (null) character.

nullEventDelimiterEnabled (type: boolean, default: false, property: log4j.layout.jsonTemplate.nullEventDelimiterEnabled)
If true , eventDelimiter will be suffixed with the \0 (null) character.

maxStringLength (type: int, default: 16384 (16 KiB), property: log4j.layout.jsonTemplate.maxStringLength)
Causes truncation of string values longer than the specified limit. When a string value is truncated, its length will be shortened to the maxStringLength provided, and truncatedStringSuffix will be appended to indicate the truncation. Note that this doesn’t cap the maximum length of the JSON document produced! Consider a JSON document rendered from a LogEvent containing 20,000 thread context map (aka. MDC) entries such that each key/value is less than 16,384 characters. This document will certainly exceed 16,384 characters, yet it will not be subject to any truncation, since every string value in the JSON document is less than 16,384 characters. That is:

An example JSON document exceeding 16,384 characters in length, yet subject to no truncation { "mdc": { "key00001": "value00001", "key00002": "value00002", "key00003": "value00003", // ...
"key16384": "value16384" } }

truncatedStringSuffix (type: String, default: …, property: log4j.layout.jsonTemplate.truncatedStringSuffix)
Suffix to append to strings truncated due to exceeding maxStringLength .

recyclerFactory (type: String, default: refer to Recycling strategy, property: log4j.layout.jsonTemplate.recyclerFactory)
Name of the Recycling strategy employed.

Plugin elements

JSON Template Layout plugin configuration accepts the following elements:

EventTemplateAdditionalField

Additional event template fields are convenient shortcuts to add custom fields to a template or override existing ones. You can specify an additional event template field using an EventTemplateAdditionalField element composed of the following attributes:

key: Entry key
value: Entry value
format: Format of the value. Accepted values are:
STRING (default): Indicates that the entry value will be string-encoded
JSON: Indicates that the entry value is already formatted in JSON and will be appended to the event template verbatim

Below we share an example configuration overriding the GelfLayout.json event template with certain custom fields: XML JSON YAML Properties Snippet from an example log4j2.xml <JsonTemplateLayout eventTemplateUri="classpath:GelfLayout.json"> <EventTemplateAdditionalField key="aString" value="foo"/> (1) <EventTemplateAdditionalField key="marker" format="JSON" value='{"$resolver": "marker", "field": "name"}'/> <EventTemplateAdditionalField key="aNumber" format="JSON" value="1"/> <EventTemplateAdditionalField key="aList" format="JSON" value='[1, 2, "three"]'/> </JsonTemplateLayout> Snippet from an example log4j2.json "JsonTemplateLayout": { "eventTemplateUri": "classpath:GelfLayout.json", "eventTemplateAdditionalField": [ { "key": "aString", "value": "foo" (1) }, { "key": "marker", "value": "{\"$resolver\": \"marker\", \"field\": \"name\"}", "format": "JSON" }, { "key": "aNumber", "value": "1", "format": "JSON" }, { "key": "aList", "value": "[1, 2,
\"three\"]", "format": "JSON" } ] } Snippet from an example log4j2.yaml JsonTemplateLayout: eventTemplateUri: "classpath:GelfLayout.json" eventTemplateAdditionalField: - key: "aString" value: "foo" (1) - key: "marker" value: '{"$resolver": "marker", "field": "name"}' format: "JSON" - key: "aNumber" value: "1" format: "JSON" - key: "aList" value: '[1, 2, "three"]' format: "JSON" Snippet from an example log4j2.properties appender.0.layout.type = JsonTemplateLayout appender.0.layout.eventTemplateUri = classpath:GelfLayout.json appender.0.layout.eventTemplateAdditionalField[0].type = EventTemplateAdditionalField appender.0.layout.eventTemplateAdditionalField[0].key = aString appender.0.layout.eventTemplateAdditionalField[0].value = foo (1) appender.0.layout.eventTemplateAdditionalField[1].type = EventTemplateAdditionalField appender.0.layout.eventTemplateAdditionalField[1].key = marker appender.0.layout.eventTemplateAdditionalField[1].value = {"$resolver": "marker", "field": "name"} appender.0.layout.eventTemplateAdditionalField[1].format = JSON appender.0.layout.eventTemplateAdditionalField[2].type = EventTemplateAdditionalField appender.0.layout.eventTemplateAdditionalField[2].key = aNumber appender.0.layout.eventTemplateAdditionalField[2].value = 1 appender.0.layout.eventTemplateAdditionalField[2].format = JSON appender.0.layout.eventTemplateAdditionalField[3].type = EventTemplateAdditionalField appender.0.layout.eventTemplateAdditionalField[3].key = aList appender.0.layout.eventTemplateAdditionalField[3].value = [1, 2, "three"] appender.0.layout.eventTemplateAdditionalField[3].format = JSON 1 Since the format attribute is not explicitly set, the default (i.e., STRING ) will be used Template configuration Templates are JSON documents, where objects containing $resolver members, such as, { "$resolver": "message", (1) "stringified": true (2) } 1 Indicating that this object should be replaced with the output from the message event template resolver 2 Passing a 
configuration to the message event template resolver are interpreted by the JSON Template Layout compiler, and replaced with the referenced event or stack trace template resolver rendering that particular item. Templates are configured by means of the following JSON Template Layout plugin attributes: eventTemplate and eventTemplateUri (for encoding LogEvent s); stackTraceElementTemplate and stackTraceElementTemplateUri (for encoding StackTraceElement s); EventTemplateAdditionalField (for extending the event template). Event templates eventTemplate and eventTemplateUri describe the JSON structure JSON Template Layout uses to encode LogEvent s. JSON Template Layout contains the following predefined event templates: EcsLayout.json The default event template, modelling the Elastic Common Schema (ECS) specification LogstashJsonEventLayoutV1.json Models the Logstash json_event pattern for Log4j GelfLayout.json Models the Graylog Extended Log Format (GELF) payload specification , with additional _thread and _logger fields. If used, it is advised to override the obligatory host field with a user-provided constant via additional event template fields, to avoid the hostName property lookup at runtime, which incurs an extra cost. GcpLayout.json Models the structure described by Google Cloud Platform structured logging , with additional _thread , _logger and _exception fields. The exception trace, if any, is written to the _exception field as well as the message field: the former is useful for explicitly searching/analyzing structured exception information, while the latter is Google’s expected place for the exception, and integrates with Google Error Reporting . JsonLayout.json Models the exact structure generated by the deprecated JsonLayout , with the exception of the thrown field.
JsonLayout serializes the Throwable as is via the Jackson ObjectMapper , whereas the JsonLayout.json event template employs the StackTraceElementLayout.json stack trace template for stack traces, to generate a document-store-friendly flat structure. This event template is only meant to let existing users of the deprecated JsonLayout migrate to JSON Template Layout without much trouble; it is not recommended for any other purpose! Event template resolvers Event template resolvers consume a LogEvent and render a certain property of it at the point of the JSON where they are declared. For instance, the marker resolver renders the marker of the event, the level resolver renders the level, and so on. An event template resolver is denoted with a special object containing a $resolver key: Example event template demonstrating the usage of the level resolver { "version": "1.0", "level": { "$resolver": "level", "field": "name" } } Here the version field will be rendered as is, while the level field will be populated by the level resolver . That is, this template will generate JSON similar to the following: Example JSON generated from the demonstrated event template { "version": "1.0", "level": "INFO" } The complete list of available event template resolvers is provided below in detail. counter Resolves a number from an internal counter counter event template resolver grammar config = [ start ] , [ overflowing ] , [ stringified ] start = "start" -> number overflowing = "overflowing" -> boolean stringified = "stringified" -> boolean Unless provided, start and overflowing are respectively set to 0 (zero) and true by default. When overflowing is set to true , the internal counter is created using a long , which is subject to overflow while incrementing, though faster and garbage-free. Otherwise, a BigInteger is used, which does not overflow, but incurs allocation costs. When stringified (disabled by default) is enabled, the resolved number will be converted to a string.
See examples Resolves a sequence of numbers starting from 0. Once Long.MAX_VALUE is reached, the counter overflows to Long.MIN_VALUE . { "$resolver": "counter" } Resolves a sequence of numbers starting from 1000. Once Long.MAX_VALUE is reached, the counter overflows to Long.MIN_VALUE . { "$resolver": "counter", "start": 1000 } Resolves a sequence of numbers starting from 0 and keeps counting for as long as the JVM heap allows. { "$resolver": "counter", "overflowing": false } caseConverter Converts the case of string values caseConverter event template resolver grammar config = case , input , [ locale ] , [ errorHandlingStrategy ] input = JSON case = "case" -> ( "upper" | "lower" ) locale = "locale" -> ( language | ( language , "_" , country ) | ( language , "_" , country , "_" , variant ) ) errorHandlingStrategy = "errorHandlingStrategy" -> ( "fail" | "pass" | "replace" ) replacement = "replacement" -> JSON input can be any available template value; e.g., a JSON literal, a lookup string, an object pointing to another resolver. Unless provided, locale points to the one returned by JsonTemplateLayoutDefaults.getLocale() , which is configured by the log4j.layout.jsonTemplate.locale system property and by default set to the default system locale. errorHandlingStrategy determines the behavior when either the input doesn’t resolve to a string value or case conversion throws an exception: fail propagates the failure; pass causes the resolved value to be passed as is; replace suppresses the failure and replaces it with the replacement , which is set to null by default. errorHandlingStrategy is set to replace by default. Most of the time JSON logs are persisted to a storage solution (e.g., Elasticsearch) that keeps a statically-typed index on fields. Hence, if a field is always expected to be of type string, using non-string replacement s or pass in errorHandlingStrategy might result in type incompatibility issues at the storage level.
Unless the input value is passed intact or replaced, case conversion is not garbage-free. See examples Convert the resolved log level strings to upper-case: { "$resolver": "caseConverter", "case": "upper", "input": { "$resolver": "level", "field": "name" } } Convert the resolved USER environment variable to lower-case using the nl_NL locale: { "$resolver": "caseConverter", "case": "lower", "locale": "nl_NL", "input": "${env:USER}" } Convert the resolved sessionId thread context data (MDC) to lower-case: { "$resolver": "caseConverter", "case": "lower", "input": { "$resolver": "mdc", "key": "sessionId" } } Above, if sessionId MDC resolves to, say, a number, case conversion will fail. Since errorHandlingStrategy is set to replace and replacement is set to null by default, the resolved value will be null . One can suppress this behavior and let the resolved sessionId number be left as is: { "$resolver": "caseConverter", "case": "lower", "input": { "$resolver": "mdc", "key": "sessionId" }, "errorHandlingStrategy": "pass" } or replace it with a custom string: { "$resolver": "caseConverter", "case": "lower", "input": { "$resolver": "mdc", "key": "sessionId" }, "errorHandlingStrategy": "replace", "replacement": "unknown" } endOfBatch Resolves the LogEvent#isEndOfBatch() boolean flag exception Resolves fields of the Throwable returned by LogEvent#getThrown() exception event template resolver grammar config = field , [ stringified ] , [ stackTrace ] field = "field" -> ( "className" | "message" | "stackTrace" ) stackTrace = "stackTrace" -> ( [ stringified ] , [ elementTemplate ] ) stringified = "stringified" -> ( boolean | truncation ) truncation = "truncation" -> ( [ suffix ] , [ pointMatcherStrings ] , [ pointMatcherRegexes ] ) suffix = "suffix" -> string pointMatcherStrings = "pointMatcherStrings" -> string[] pointMatcherRegexes = "pointMatcherRegexes" -> string[] elementTemplate = "elementTemplate" -> object stringified is set to false by default.
stringified at the root level is deprecated in favor of stackTrace.stringified , which takes precedence if both are provided. pointMatcherStrings and pointMatcherRegexes enable the truncation of stringified stack traces after the given matching point. If both parameters are provided, pointMatcherStrings will be checked first. If a stringified stack trace truncation takes place, it will be indicated with a suffix , which, unless explicitly provided, defaults to the truncatedStringSuffix configured on the layout. Every truncation suffix is prefixed with a newline. Stringified stack trace truncation operates on Caused by: and Suppressed: label blocks. That is, matchers are executed against each label in isolation. elementTemplate is an object describing the template to be used while resolving the StackTraceElement array. If stringified is set to true , elementTemplate will be discarded. By default, elementTemplate is set to null and is instead populated from the layout configuration. That is, the stack trace element template can also be provided using the stackTraceElementTemplate and stackTraceElementTemplateUri layout configuration attributes. The template to be employed is determined in the following order: 1. elementTemplate provided in the resolver configuration 2. The stackTraceElementTemplate layout configuration attribute (the default is populated from the log4j.layout.jsonTemplate.stackTraceElementTemplate system property) 3. The stackTraceElementTemplateUri layout configuration attribute (the default is populated from the log4j.layout.jsonTemplate.stackTraceElementTemplateUri system property) See Stack trace element templates for the list of available resolvers in a stack trace element template. Note that this resolver is toggled by the stackTraceEnabled layout configuration attribute. Since Throwable#getStackTrace() clones the original StackTraceElement[] , access to (and hence rendering of) stack traces is not garbage-free.
Each pointMatcherRegexes item triggers a Pattern#matcher() call, which is not garbage-free either. See examples Resolve LogEvent#getThrown().getClass().getCanonicalName() : { "$resolver": "exception", "field": "className" } Resolve the stack trace into a list of StackTraceElement objects: { "$resolver": "exception", "field": "stackTrace" } Resolve the stack trace into a string field: { "$resolver": "exception", "field": "stackTrace", "stackTrace": { "stringified": true } } Resolve the stack trace into a string field such that the content will be truncated after the given point matcher: { "$resolver": "exception", "field": "stackTrace", "stackTrace": { "stringified": { "truncation": { "suffix": "... [truncated]", "pointMatcherStrings": ["at javax.servlet.http.HttpServlet.service"] } } } } Resolve the stack trace into an object described by the provided stack trace element template: { "$resolver": "exception", "field": "stackTrace", "stackTrace": { "elementTemplate": { "class": { "$resolver": "stackTraceElement", "field": "className" }, "method": { "$resolver": "stackTraceElement", "field": "methodName" }, "file": { "$resolver": "stackTraceElement", "field": "fileName" }, "line": { "$resolver": "stackTraceElement", "field": "lineNumber" } } } } See Stack trace element templates for further details on resolvers available for StackTraceElement templates. exceptionRootCause Resolves the fields of the innermost Throwable returned by LogEvent#getThrown() . Its syntax and garbage-footprint are identical to the exception resolver. 
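Since exceptionRootCause mirrors the exception resolver's syntax, pulling just the root cause's message into its own field can be done as in the sketch below (the rootCauseMessage field name is an illustrative choice):

```json
{
  "rootCauseMessage": {
    "$resolver": "exceptionRootCause",
    "field": "message"
  }
}
```

This pairs well with a full exception stack trace field: search on the concise root cause, drill into the full trace when needed.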
level Resolves the fields of the LogEvent#getLevel() level event template resolver grammar config = field , [ severity ] field = "field" -> ( "name" | "severity" ) severity = severity-field severity-field = "field" -> ( "keyword" | "code" ) See examples Resolve the level name: { "$resolver": "level", "field": "name" } Resolve the Syslog severity keyword: { "$resolver": "level", "field": "severity", "severity": { "field": "keyword" } } Resolve the Syslog severity code: { "$resolver": "level", "field": "severity", "severity": { "field": "code" } } logger Resolves LogEvent#getLoggerFqcn() and LogEvent#getLoggerName() . logger event template grammar config = "field" -> ( "name" | "fqcn" ) See examples Resolve the logger name: { "$resolver": "logger", "field": "name" } Resolve the logger’s fully qualified class name: { "$resolver": "logger", "field": "fqcn" } main Performs Main Argument Lookup for the given index or key main event template resolver grammar config = ( index | key ) index = "index" -> number key = "key" -> string See examples Resolve the 1st main() method argument: { "$resolver": "main", "index": 0 } Resolve the argument coming right after --userId : { "$resolver": "main", "key": "--userId" } map Resolves MapMessage s. See Map resolver for details. marker Resolves the marker of the event marker event template resolver grammar config = "field" -> ( "name" | "parents" ) See examples Resolve the marker name: { "$resolver": "marker", "field": "name" } Resolve the names of the marker’s parents: { "$resolver": "marker", "field": "parents" } mdc Resolves the thread context map, aka. Mapped Diagnostic Context (MDC). See Map resolver for details. log4j2.garbagefreeThreadContextMap flag needs to be turned on to iterate the map without allocations. 
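As a concrete illustration of the mdc resolver (its full grammar is shared with the map resolver, described under Map resolver below), the following sketch extracts a single userId entry and collects the remaining ctx:-prefixed entries, stringified, into a context object (the key names are illustrative):

```json
{
  "userId": {
    "$resolver": "mdc",
    "key": "userId"
  },
  "context": {
    "$resolver": "mdc",
    "pattern": "ctx:.*",
    "stringified": true
  }
}
```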
message Resolves the message of the event message event template resolver grammar config = [ stringified ] , [ fallbackKey ] stringified = "stringified" -> boolean fallbackKey = "fallbackKey" -> string For simple string messages, the resolution is performed without allocations. For ObjectMessage s and MultiformatMessage s, it depends on the message implementation. See examples Resolve the message into a string: { "$resolver": "message", "stringified": true } Resolve the message such that if it is an ObjectMessage or a MultiformatMessage with JSON support, its type (string, list, object, etc.) will be retained: { "$resolver": "message" } Given the above configuration, a SimpleMessage will generate a "sample log message" , whereas a MapMessage will generate a {"action": "login", "sessionId": "87asd97a"} . Certain indexed log storage systems (e.g., Elasticsearch ) will not allow both values to coexist due to type mismatch: one is a string while the other is an object . Here one can use a fallbackKey to work around this problem: { "$resolver": "message", "fallbackKey": "formattedMessage" } Using this configuration, a SimpleMessage will generate a {"formattedMessage": "sample log message"} , and a MapMessage will generate a {"action": "login", "sessionId": "87asd97a"} . Note that both emitted JSONs are of type object and have no type-conflicting fields. messageParameter Resolves LogEvent#getMessage().getParameters() messageParameter event template resolver grammar config = [ stringified ] , [ index ] stringified = "stringified" -> boolean index = "index" -> number Regarding garbage footprint, the stringified flag translates to String.valueOf(value) , hence mind values that are not String -typed. Further, LogEvent#getMessage() is expected to implement the ParameterVisitable interface, which is the case if the log4j2.enableThreadlocals property is set to true .
See examples Resolve the message parameters into an array: { "$resolver": "messageParameter" } Resolve the string representation of all message parameters into an array: { "$resolver": "messageParameter", "stringified": true } Resolve the first message parameter: { "$resolver": "messageParameter", "index": 0 } Resolve the string representation of the first message parameter: { "$resolver": "messageParameter", "index": 0, "stringified": true } ndc Resolves the thread context stack, aka. Nested Diagnostic Context (NDC), that is, the String[] returned by LogEvent#getContextStack() ndc event template resolver grammar config = [ pattern ] pattern = "pattern" -> string See examples Resolve all NDC values into a list: { "$resolver": "ndc" } Resolve all NDC values matching the pattern regex: { "$resolver": "ndc", "pattern": "user(Role|Rank):\\w+" } pattern Resolver delegating to Pattern Layout pattern event template resolver grammar config = pattern , [ stackTraceEnabled ] pattern = "pattern" -> string stackTraceEnabled = "stackTraceEnabled" -> boolean Unlike providing the pattern attribute to Pattern Layout in a configuration file, property substitutions found in the pattern will not be resolved. The default value of stackTraceEnabled is inherited from the parent JSON Template Layout. This resolver is mostly intended as an emergency lever for when all other JSON Template Layout resolvers fall short of addressing your need. If you find yourself using this, it is highly likely you are either doing something wrong, or JSON Template Layout needs some improvement. In either case, you are advised to share your use case with the maintainers in one of our support channels .
See examples Resolve the string produced by the %p %c{1.} [%t] %X{userId} %X %m%ex pattern: { "$resolver": "pattern", "pattern": "%p %c{1.} [%t] %X{userId} %X %m%ex" } source Resolves the fields of the StackTraceElement returned by LogEvent#getSource() source event template resolver grammar config = "field" -> ( "className" | "fileName" | "methodName" | "lineNumber" ) Note that this resolver is toggled by the locationInfoEnabled layout configuration attribute. Capturing the source location information is an expensive operation, and is not garbage-free. The logger resolver can generally be used as a zero-cost substitute for className . See this section of the layouts page for details. See examples Resolve the line number: { "$resolver": "source", "field": "lineNumber" } thread Resolves LogEvent#getThreadId() , LogEvent#getThreadName() , and LogEvent#getThreadPriority() thread event template resolver grammar config = "field" -> ( "name" | "id" | "priority" ) See examples Resolve the thread name: { "$resolver": "thread", "field": "name" } timestamp Resolves LogEvent#getInstant() timestamp event template resolver grammar config = [ patternConfig | epochConfig ] patternConfig = "pattern" -> ( [ format ] , [ timeZone ] , [ locale ] ) format = "format" -> string timeZone = "timeZone" -> string locale = "locale" -> ( language | ( language , "_" , country ) | ( language , "_" , country , "_" , variant ) ) epochConfig = "epoch" -> ( unit , [ rounded ] ) unit = "unit" -> ( "nanos" | "millis" | "secs" | "millis.nanos" | "secs.nanos" ) rounded = "rounded" -> boolean The resolvers based on the epochConfig expression are garbage-free. The resolvers based on the patternConfig expression are low-garbage and generate temporary objects only once a minute. See examples Table 1.
timestamp template resolver examples:

{ "$resolver": "timestamp" } -> 2020-02-07T13:38:47.098+02:00
{ "$resolver": "timestamp", "pattern": { "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", "timeZone": "UTC", "locale": "en_US" } } -> 2020-02-07T13:38:47.098Z
{ "$resolver": "timestamp", "epoch": { "unit": "secs" } } -> 1581082727.982123456
{ "$resolver": "timestamp", "epoch": { "unit": "secs", "rounded": true } } -> 1581082727
{ "$resolver": "timestamp", "epoch": { "unit": "secs.nanos" } } -> 982123456
{ "$resolver": "timestamp", "epoch": { "unit": "millis" } } -> 1581082727982.123456
{ "$resolver": "timestamp", "epoch": { "unit": "millis", "rounded": true } } -> 1581082727982
{ "$resolver": "timestamp", "epoch": { "unit": "millis.nanos" } } -> 123456
{ "$resolver": "timestamp", "epoch": { "unit": "nanos" } } -> 1581082727982123456

Map resolver

ReadOnlyStringMap is Log4j’s Map<String, Object> equivalent with garbage-free accessors, and is heavily employed throughout the code base. It is the data structure backing both the thread context map (aka. Mapped Diagnostic Context (MDC)) and MapMessage implementations. Hence, template resolvers for both of these are provided by a single backend: ReadOnlyStringMapResolver . Put another way, both mdc and map resolvers support identical configuration, behaviour, and garbage footprint, which are detailed below.

Map resolver grammar config = singleAccess | multiAccess singleAccess = key , [ stringified ] key = "key" -> string stringified = "stringified" -> boolean multiAccess = [ pattern ] , [ replacement ] , [ flatten ] , [ stringified ] pattern = "pattern" -> string replacement = "replacement" -> string flatten = "flatten" -> ( boolean | flattenConfig ) flattenConfig = [ flattenPrefix ] flattenPrefix = "prefix" -> string

singleAccess resolves a single field, whilst multiAccess resolves a multitude of fields. If flatten is provided, multiAccess merges the fields with the parent; otherwise, it creates a new JSON object containing the values.
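The pattern and replacement options of the multiAccess grammar above behave like ordinary Java regular expressions applied to each key (the documentation equates them with Pattern.compile(pattern).matcher(key).matches() and replaceAll(replacement)). The sketch below only illustrates those documented semantics and is not Log4j code; the MapFieldMatcher class and its select method are invented for the demo.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class MapFieldMatcher {

    // Documented multiAccess semantics: keep entries whose key fully matches
    // `pattern` (Matcher#matches), optionally renaming the key with
    // `replacement` (Matcher#replaceAll).
    public static Map<String, Object> select(
            Map<String, Object> fields, String pattern, String replacement) {
        Pattern compiled = Pattern.compile(pattern);
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : fields.entrySet()) {
            if (compiled.matcher(entry.getKey()).matches()) {
                String key = replacement == null
                        ? entry.getKey()
                        : compiled.matcher(entry.getKey()).replaceAll(replacement);
                result.put(key, entry.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("user:role", "admin");
        fields.put("user:rank", 9);
        fields.put("sessionId", "abc");
        // Keeps the two user:* fields and strips the "user:" prefix,
        // mirroring pattern "user:(role|rank)" with replacement "$1".
        System.out.println(select(fields, "user:(role|rank)", "$1"));
    }
}
```

Running the main method keeps only the two matching keys, renamed to role and rank, which is the same selection the pattern/replacement examples below perform on MDC or MapMessage fields.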
Enabling stringified flag converts each value to its string representation. Regex provided in the pattern is used to match against the keys. If provided, replacement will be used to replace the matched keys. These two are analogous to Pattern.compile(pattern).matcher(key).matches() and Pattern.compile(pattern).matcher(key).replaceAll(replacement) calls. Regarding garbage footprint, stringified flag translates to String.valueOf(value) , hence mind values that are not String -typed. pattern and replacement incur pattern matcher allocation costs. Writing certain non-primitive values (e.g., BigDecimal , Set , etc.) to JSON generates garbage, though most (e.g., int , long , String , List , boolean[] , etc.) don’t. See examples "$resolver" is left out in the following examples, since it is to be defined by the actual resolver, e.g., map , mdc . Resolve the value of the field keyed with user:role : { "$resolver": "…", "key": "user:role" } Resolve the string representation of the user:rank field value: { "$resolver": "…", "key": "user:rank", "stringified": true } Resolve all fields into an object: { "$resolver": "…" } Resolve all fields into an object such that values are converted to string: { "$resolver": "…", "stringified": true } Resolve all fields whose keys match with the user:(role|rank) regex into an object: { "$resolver": "…", "pattern": "user:(role|rank)" } Resolve all fields whose keys match with the user:(role|rank) regex into an object after removing the user: prefix in the key: { "$resolver": "…", "pattern": "user:(role|rank)", "replacement": "$1" } Merge all fields whose keys are matching with the user:(role|rank) regex into the parent: { "$resolver": "…", "flatten": true, "pattern": "user:(role|rank)" } After converting the corresponding field values to string, merge all fields to parent such that keys are prefixed with _ : { "$resolver": "…", "stringified": true, "flatten": { "prefix": "_" } } Stack trace element templates exception and exceptionRootCause 
event template resolvers can encode an exception stack trace (i.e., the StackTraceElement s returned by Throwable#getStackTrace() ) into a JSON array. While doing so, they employ the underlying JSON templating infrastructure again. The stack trace template used by these event template resolvers to encode StackTraceElement s can be configured in the following ways:

elementTemplate provided in the resolver configuration
stackTraceElementTemplate layout configuration attribute (The default is populated from the log4j.layout.jsonTemplate.stackTraceElementTemplate system property.)
stackTraceElementTemplateUri layout configuration attribute (The default is populated from the log4j.layout.jsonTemplate.stackTraceElementTemplateUri system property.)

By default, stackTraceElementTemplateUri is set to classpath:StackTraceElementLayout.json , which references a classpath resource bundled with JSON Template Layout:

StackTraceElementLayout.json bundled as a classpath resource

{
  "class": { "$resolver": "stackTraceElement", "field": "className" },
  "method": { "$resolver": "stackTraceElement", "field": "methodName" },
  "file": { "$resolver": "stackTraceElement", "field": "fileName" },
  "line": { "$resolver": "stackTraceElement", "field": "lineNumber" }
}

Stack trace element template resolvers

Similar to event template resolvers, which consume a LogEvent and render a certain property of it at the point of the JSON where they are declared, stack trace element template resolvers consume a StackTraceElement . The complete list of available stack trace element template resolvers is provided below.

stackTraceElement Resolves certain fields of a StackTraceElement

Stack trace template resolver grammar config = "field" -> ( "className" | "fileName" | "methodName" | "lineNumber" )

All of the above accesses to StackTraceElement are garbage-free.
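The four fields in the default template above map one-to-one onto the JDK's StackTraceElement getters. The following standalone snippet shows that mapping without involving Log4j; the StackTraceElementFields class and its describe helper are made up for illustration.

```java
public class StackTraceElementFields {

    // Render the four fields used by StackTraceElementLayout.json, mapping
    // the template field names onto the corresponding JDK getters:
    // className -> getClassName(), methodName -> getMethodName(),
    // fileName -> getFileName(), lineNumber -> getLineNumber().
    public static String describe(StackTraceElement element) {
        return "class=" + element.getClassName()
                + " method=" + element.getMethodName()
                + " file=" + element.getFileName()
                + " line=" + element.getLineNumber();
    }

    public static void main(String[] args) {
        // A hand-built element standing in for one entry of
        // Throwable#getStackTrace().
        StackTraceElement element =
                new StackTraceElement("com.acme.Foo", "bar", "Foo.java", 42);
        System.out.println(describe(element));
        // prints: class=com.acme.Foo method=bar file=Foo.java line=42
    }
}
```

The exception and exceptionRootCause resolvers apply this per-element rendering to every frame of the stack trace, emitting one JSON object per StackTraceElement.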
Property substitution

Property substitutions (e.g., ${myProperty} ), including lookups (e.g., ${java:version} , ${env:USER} , ${date:MM-dd-yyyy} ), are supported, but extra care needs to be taken. We strongly advise you to carefully read the configuration manual before using them. Lookups are intended as a very generic convenience utility to perform string interpolation for, in particular, configuration files and components (e.g., layouts) lacking this mechanism. JSON Template Layout has a rich template resolver collection, and you should always prefer it over lookups whenever possible.

Which resolvers can I use to replace lookups?

Context Map Lookup -> mdc
Date Lookup -> timestamp
Event Lookup -> exception, level, logger, marker, message, thread, timestamp
Lower Lookup -> caseConverter
Main Arguments Lookup -> main
Map Lookup -> map
Marker Lookup -> marker
Upper Lookup -> caseConverter

Property substitution in event templates

JSON Template Layout performs property substitution in string literals in templates, except when they are located in the configuration object of a resolver. Consider the following event template file provided using the eventTemplateUri attribute: { "java-version": "${java:version}", (1) "pid": { "$resolver": "pattern", "pattern": "${env:NO_SUCH_KEY:-%pid}" (2) } } 1 This works. ${java:version} will be replaced with the corresponding value. 2 This won’t work! That is, the ${env:NO_SUCH_KEY:-%pid} literal will not get substituted, since it is located in the configuration object of a resolver.

Property substitution in configuration files

If the very same event template shared above is inlined in a configuration file using the eventTemplate attribute or additional event template fields, then all substitutions will be replaced, once, at configuration time. This has nothing to do with JSON Template Layout itself; it is the substitution performed by the configuration mechanism when the configuration is read.
Consider the following example: XML JSON YAML Properties Snippet from an example log4j2.xml <JsonTemplateLayout eventTemplate='{"instant": {"$resolver": "pattern", "pattern": "${env:LOG4J_DATE_PATTERN:-%d}"}}'> (1) <EventTemplateAdditionalField key="message" format="JSON" value='{"$resolver": "pattern", "pattern": "${env:LOG4J_MESSAGE_PATTERN:-%m}"}'/> (2) </JsonTemplateLayout> Snippet from an example log4j2.json "JsonTemplateLayout": { "eventTemplate": "{\"instant\": {\"$resolver\": \"pattern\", \"pattern\": \"${env:LOG4J_DATE_PATTERN:-%d}\"}}", (1) "eventTemplateAdditionalField": [ { "key": "message", "format": "JSON", "value": "{\"$resolver\": \"pattern\", \"pattern\": \"${env:LOG4J_MESSAGE_PATTERN:-%m}\"}" (2) } ] } Snippet from an example log4j2.yaml JsonTemplateLayout: eventTemplate: '{"instant": {"$resolver": "pattern", "pattern": "${env:LOG4J_DATE_PATTERN:-%d}"}}' (1) eventTemplateAdditionalField: - key: "message" format: "JSON" value: '{"$resolver": "pattern", "pattern": "${env:LOG4J_MESSAGE_PATTERN:-%m}"}' (2) Snippet from an example log4j2.properties appender.0.layout.type = JsonTemplateLayout appender.0.layout.eventTemplate = {"instant": {"$resolver": "pattern", "pattern": "${env:LOG4J_DATE_PATTERN:-%d}"}} (1) appender.0.layout.eventTemplateAdditionalField[0].type = EventTemplateAdditionalField appender.0.layout.eventTemplateAdditionalField[0].key = message appender.0.layout.eventTemplateAdditionalField[0].format = JSON appender.0.layout.eventTemplateAdditionalField[0].value = {"$resolver": "pattern", "pattern": "${env:LOG4J_MESSAGE_PATTERN:-%m}"} (2) 1 eventTemplate will be passed to the layout with ${env:…​} substituted 2 value will be passed to the layout with ${env:…​} substituted External values injected this way can corrupt your JSON schema. It is your responsibility to ensure the sanitization and safety of the substitution source. 
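The ${env:KEY:-default} literals in the snippets above use the lookup-with-default syntax: everything after :- serves as the fallback when the lookup produces no value. A minimal sketch of that fallback rule follows; this is not Log4j's actual substitutor, and the LookupDefaultDemo class and its resolve method are hypothetical names for the demo.

```java
import java.util.Map;

public class LookupDefaultDemo {

    // Resolve a "${prefix:key:-default}" expression against a lookup table,
    // falling back to the text after ":-" when the key is absent.
    public static String resolve(String expression, Map<String, String> lookup) {
        if (!expression.startsWith("${") || !expression.endsWith("}")) {
            return expression; // not a substitution expression
        }
        String body = expression.substring(2, expression.length() - 1);
        int defaultSeparator = body.indexOf(":-");
        String defaultValue = null;
        if (defaultSeparator >= 0) {
            defaultValue = body.substring(defaultSeparator + 2);
            body = body.substring(0, defaultSeparator);
        }
        String value = lookup.get(body);
        return value != null ? value : defaultValue;
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("env:LOG4J_DATE_PATTERN", "%d{ISO8601}");
        // Present in the table: the looked-up value wins.
        System.out.println(resolve("${env:LOG4J_DATE_PATTERN:-%d}", env));
        // Absent: the "%pid" fallback after ":-" is used instead.
        System.out.println(resolve("${env:NO_SUCH_KEY:-%pid}", env));
    }
}
```

This is why the examples above are safe in environments where the variable is unset: the Pattern Layout conversion specifier after :- simply takes over.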
Recycling strategy

Encoding a LogEvent without putting load on the garbage collector can yield significant performance benefits. JSON Template Layout contains the Recycler interface to implement Garbage-free logging . A Recycler is created using a RecyclerFactory , which implements a particular recycling strategy. JSON Template Layout contains the following predefined recycler factories:

dummy – performs no recycling, hence each recycling attempt will result in a new instance. This will obviously create load on the garbage collector. It is a good choice for applications with low or medium log rates.

threadLocal – performs best, since every instance is stored in ThreadLocal s and accessed without any synchronization cost. Though this might not be a desirable option for applications running with a considerable number of threads, e.g., a web servlet.

queue – the best of both worlds. It allows recycling of objects up to a certain number ( capacity ). When this limit is exceeded due to excessive concurrent load (e.g., capacity is 50 but there are 51 threads concurrently trying to encode), it starts allocating. queue is a good strategy where threadLocal is not desirable.

queue accepts the following optional parameters:

supplier of type java.util.Queue , defaults to org.jctools.queues.MpmcArrayQueue.new if JCTools is in the classpath; otherwise java.util.concurrent.ArrayBlockingQueue.new
capacity of type int , defaults to max(8,2*cpuCount+1)

Some example configurations of the queue recycling strategy are as follows:

queue:supplier=org.jctools.queues.MpmcArrayQueue.new (use MpmcArrayQueue from JCTools)
queue:capacity=10 (set the queue capacity to 10)
queue:supplier=java.util.concurrent.ArrayBlockingQueue.new,capacity=50 (use ArrayBlockingQueue with a capacity of 50)

The default RecyclerFactory is threadLocal , if log4j2.enable.threadlocals=true ; otherwise, queue . The effective recycler factory can be configured using the recyclerFactory plugin attribute.
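The queue strategy described above can be sketched with a plain ArrayBlockingQueue: acquire tries the pool first and allocates on a miss, while release returns the instance and silently drops it when the pool is already at capacity. This QueueRecycler class is a simplified, hypothetical stand-in, not JSON Template Layout's actual Recycler interface (which, among other things, also resets instance state between uses).

```java
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.function.Supplier;

public class QueueRecycler<T> {

    private final Queue<T> pool;
    private final Supplier<T> allocator;

    public QueueRecycler(int capacity, Supplier<T> allocator) {
        this.pool = new ArrayBlockingQueue<>(capacity);
        this.allocator = allocator;
    }

    // Reuse a pooled instance when available; otherwise allocate a new one.
    // Allocating on a pool miss is the "starts allocating" behavior the
    // documentation describes for excess concurrent load.
    public T acquire() {
        T pooled = pool.poll();
        return pooled != null ? pooled : allocator.get();
    }

    // Return the instance to the pool; offer() silently drops it when the
    // pool is already full, which caps the memory retained by the pool.
    public void release(T instance) {
        pool.offer(instance);
    }

    public static void main(String[] args) {
        QueueRecycler<StringBuilder> recycler =
                new QueueRecycler<>(2, StringBuilder::new);
        StringBuilder buffer = recycler.acquire(); // pool empty: allocated
        recycler.release(buffer);
        System.out.println(buffer == recycler.acquire()); // prints true
    }
}
```

Under this scheme a steady-state workload with at most capacity concurrent encoders allocates nothing after warm-up, which is the trade-off between dummy and threadLocal described above.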
Next to the predefined ones, you can introduce custom RecyclerFactory implementations too. See Extending recycler factories for details.

Extending

JSON Template Layout relies on the Log4j plugin system to compose the features it provides. This makes it possible for users to extend the plugin-based feature set as they see fit. At the moment, the following features are implemented by means of plugins:

Event template resolvers (e.g., the exception , message , and level event template resolvers)
Event template interceptors (e.g., injection of eventTemplateAdditionalField )
Recycler factories

The following sections cover how to extend these in detail. While the existing features should address most common use cases, you might find yourself needing to implement a custom one. If this is the case, we would really appreciate it if you shared your use case in a user support channel .

Plugin preliminaries

The Log4j plugin system is the de facto extension mechanism embraced by various Log4j components. Plugins provide extension points to components that can be used to implement new features without modifying the original component. It is analogous to a dependency injection framework, but curated for Log4j-specific needs. In a nutshell, you annotate your classes with @Plugin and their ( static ) factory methods with @PluginFactory . Last, you inform the Log4j plugin system to discover these custom classes. This is done by running the PluginProcessor annotation processor while building your project. Refer to Plugins for details.

Extending event template resolvers

All available event template resolvers are simple plugins employed by JSON Template Layout. To add new ones, one just needs to create their own EventResolver and instruct its injection via a @Plugin -annotated EventResolverFactory class. For demonstration purposes, below we will create a randomNumber event template resolver.
Let’s start with the actual resolver:

Custom random number event template resolver

package com.acme.logging.log4j.layout.template.json;

import java.util.List;

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.layout.template.json.resolver.EventResolver;
import org.apache.logging.log4j.layout.template.json.resolver.TemplateResolverConfig;
import org.apache.logging.log4j.layout.template.json.util.JsonWriter;

/**
 * Resolves a random floating point number.
 *
 * <h3>Configuration</h3>
 *
 * <pre>
 * config = ( [ range ] )
 * range = number[]
 * </pre>
 *
 * {@code range} is a number array with two elements, where the first number
 * denotes the start (inclusive) and the second denotes the end (exclusive).
 * {@code range} is optional and by default set to {@code [0, 1]}.
 *
 * <h3>Examples</h3>
 *
 * Resolve a random number between 0 and 1:
 *
 * <pre>
 * {
 *   "$resolver": "randomNumber"
 * }
 * </pre>
 *
 * Resolve a random number between -0.123 and 0.123:
 *
 * <pre>
 * {
 *   "$resolver": "randomNumber",
 *   "range": [-0.123, 0.123]
 * }
 * </pre>
 */
public final class RandomNumberResolver implements EventResolver {

    private final double loIncLimit;

    private final double hiExcLimit;

    RandomNumberResolver(TemplateResolverConfig config) {
        List<Number> rangeArray = config.getList("range", Number.class);
        if (rangeArray == null) {
            this.loIncLimit = 0D;
            this.hiExcLimit = 1D;
        } else if (rangeArray.size() != 2) {
            throw new IllegalArgumentException(
                    "range array must be of size two: " + config);
        } else {
            this.loIncLimit = rangeArray.get(0).doubleValue();
            this.hiExcLimit = rangeArray.get(1).doubleValue();
            if (loIncLimit > hiExcLimit) {
                throw new IllegalArgumentException("invalid range: " + config);
            }
        }
    }

    static String getName() {
        return "randomNumber";
    }

    @Override
    public void resolve(LogEvent value, JsonWriter jsonWriter) {
        double randomNumber = loIncLimit + (hiExcLimit - loIncLimit) * Math.random();
        jsonWriter.writeNumber(randomNumber);
    }

}

Next, create an EventResolverFactory class to admit RandomNumberResolver into the event resolver factory plugin registry.
Resolver factory class to admit RandomNumberResolver into the event resolver factory plugin registry

package com.acme.logging.log4j.layout.template.json;

import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;
import org.apache.logging.log4j.layout.template.json.resolver.EventResolverContext;
import org.apache.logging.log4j.layout.template.json.resolver.EventResolverFactory;
import org.apache.logging.log4j.layout.template.json.resolver.TemplateResolver;
import org.apache.logging.log4j.layout.template.json.resolver.TemplateResolverConfig;
import org.apache.logging.log4j.layout.template.json.resolver.TemplateResolverFactory;

/**
 * {@link RandomNumberResolver} factory.
 */
@Plugin(name = "RandomNumberResolverFactory", category = TemplateResolverFactory.CATEGORY)
public final class RandomNumberResolverFactory implements EventResolverFactory {

    private static final RandomNumberResolverFactory INSTANCE = new RandomNumberResolverFactory();

    private RandomNumberResolverFactory() {}

    @PluginFactory
    public static RandomNumberResolverFactory getInstance() {
        return INSTANCE;
    }

    @Override
    public String getName() {
        return RandomNumberResolver.getName();
    }

    @Override
    public RandomNumberResolver create(EventResolverContext context, TemplateResolverConfig config) {
        return new RandomNumberResolver(config);
    }

}

Done! Let’s use our custom event resolver:

Log4j configuration employing the custom randomNumber event resolver

<?xml version="1.0" encoding="UTF-8"?>
<Configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns="https://logging.apache.org/xml/ns"
               xsi:schemaLocation="
                   https://logging.apache.org/xml/ns
                   https://logging.apache.org/xml/ns/log4j-config-2.xsd">
  <!-- ... -->
  <JsonTemplateLayout>
    <EventTemplateAdditionalField
        key="id"
        format="JSON"
        value='{"$resolver": "randomNumber", "range": [0, 1000000]}'/>
  </JsonTemplateLayout>
  <!-- ...
--> </Configuration>

All available event template resolvers are located in the org.apache.logging.log4j.layout.template.json.resolver package. It is a fairly rich resource for inspiration while implementing new resolvers.

Intercepting the template resolver compiler

JSON Template Layout allows interception of the template resolver compilation, which is the process of converting a template into a Java function that performs the JSON encoding. This interception mechanism is internally used to implement the eventTemplateRootObjectKey and EventTemplateAdditionalField features. In a nutshell, one needs to create a @Plugin -annotated class implementing the EventResolverInterceptor interface. To see the interception in action, check out the EventRootObjectKeyInterceptor class, which is responsible for implementing the eventTemplateRootObjectKey feature:

Event interceptor to add eventTemplateRootObjectKey , if present

import org.apache.logging.log4j.layout.template.json.resolver.EventResolverContext;
import org.apache.logging.log4j.layout.template.json.resolver.Even
https://support.microsoft.com/zh-hk/windows/%E5%9C%A8-microsoft-edge-%E4%B8%AD%E7%AE%A1%E7%90%86-cookie-%E6%AA%A2%E8%A6%96-%E5%85%81%E8%A8%B1-%E5%B0%81%E9%8E%96-%E5%88%AA%E9%99%A4%E5%92%8C%E4%BD%BF%E7%94%A8-168dab11-0753-043d-7c16-ede5947fc64d | Manage cookies in Microsoft Edge: view, allow, block, delete, and use - Microsoft Support
Manage cookies in Microsoft Edge: view, allow, block, delete, and use

Applies To: Windows 10, Windows 11, Microsoft Edge

Cookies are small pieces of data that the websites you visit store on your device. They serve many purposes, such as remembering sign-in credentials and site preferences, and tracking user behavior. However, you may want to delete cookies for privacy reasons or to resolve browsing problems. This article provides instructions for how to:

view all cookies
allow all cookies
allow cookies from specific sites
block third-party cookies
block all cookies
block cookies from specific sites
delete all cookies
delete cookies from specific sites
delete cookies every time you close the browser
use cookies to preload pages for faster browsing

View all cookies

1. Open the Edge browser and select Settings and more in the top-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies, then click See all cookies and site data to view all stored cookies and related site information.

Allow all cookies

By allowing cookies, websites will be able to save and retrieve data in your browser, which can enhance your browsing experience by remembering your preferences and sign-in information.

1. Open the Edge browser and select Settings and more in the top-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and turn on the Allow sites to save and read cookie data (recommended) toggle to allow all cookies.

Allow cookies from a specific site

By allowing cookies, websites will be able to save and retrieve data in your browser, which can enhance your browsing experience by remembering your preferences and sign-in information.

1. Open the Edge browser and select Settings and more in the top-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and go to Allow.
4. Select Add site and enter the site's URL to allow cookies on a per-site basis.

Block third-party cookies

If you don't want third-party sites to store cookies on your PC, you can block them. Doing so, however, may prevent some pages from displaying correctly, and you may receive a message from a site telling you that you must allow cookies to view it.

1. Open the Edge browser and select Settings and more in the top-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and turn on the Block third-party cookies toggle.

Block all cookies

If you don't want websites to store cookies on your PC, you can block them. Doing so, however, may prevent some pages from displaying correctly, and you may receive a message from a site telling you that you must allow cookies to view it.

1. Open the Edge browser and select Settings and more in the top-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and turn off the Allow sites to save and read cookie data (recommended) toggle to block all cookies.

Block cookies from a specific site

Microsoft Edge lets you block cookies from specific sites, but doing so may prevent some pages from displaying correctly, or you may receive a message from a site letting you know that you need to allow cookies to view it. To block cookies from a specific site:

1. Open the Edge browser and select Settings and more in the top-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and go to Block.
4. Select Add site and enter the site's URL to block cookies on a per-site basis.

Delete all cookies

1. Open the Edge browser and select Settings and more in the top-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Clear browsing data, then select Choose what to clear next to Clear browsing data now.
4. Under Time range, choose a time range from the list.
5. Select Cookies and other site data, then select Clear now.

Note: Alternatively, you can press CTRL + SHIFT + DELETE and then continue with steps 4 and 5 to delete cookies.

All cookies and other site data within the selected time range will be deleted immediately. This will sign you out of most websites.

Delete cookies from a specific site

1. Open the Edge browser and select Settings and more > Settings > Privacy, search, and services.
2. Select Cookies, click See all cookies and site data, then search for the site whose cookies you want to delete.
3. Select the down arrow to the right of the site whose cookies you want to delete, then select Delete.

The cookies for the selected site are now deleted. Repeat this step for any other site whose cookies you want to delete.

Delete cookies every time you close the browser

1. Open the Edge browser and select Settings and more > Settings > Privacy, search, and services.
2. Select Clear browsing data, then select Choose what to clear every time you close the browser.
3. Turn on the Cookies and other site data toggle.

With this feature turned on, all cookies and other site data will be deleted every time you close the Edge browser. This will sign you out of most websites.

Use cookies to preload pages for faster browsing

1. Open the Edge browser and select Settings and more in the top-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and turn on the Preload pages for faster browsing and searching toggle.
https://bugs.php.net/bug.php?id=49106&edit=2 | PHP :: Bug #49106 :: PHP incorrectly sets no_local_copy=1 on response as Apache 2 module

Bug #49106 PHP incorrectly sets no_local_copy=1 on response as Apache 2 module

Submitted: 2009-07-30 02:40 UTC Modified: 2021-10-27 09:18 UTC Votes: 28 Avg. Score: 4.0 ± 1.2 Reproduced: 8 of 15 (53.3%) Same Version: 8 (100.0%) Same OS: 7 (87.5%) From: n dot sherlock at gmail dot com Assigned: Status: Analyzed Package: Apache2 related PHP Version: 5.*, 6 OS: * Private report: No CVE-ID: None

[2009-07-30 02:40 UTC] n dot sherlock at gmail dot com Description: ------------ If PHP (5.3.0) is running as an (Apache 2) module, it currently sets no_local_copy to 1 on the response it sends to Apache (sapi/apache2handler/sapi_apache2.c:463). It looks like this flag was set to disallow Apache from erroneously creating its own "304 Not Modified" responses based on the ETag or Last-Modified-Date of the PHP scripts' sourcecode itself, which would result in stale pages being served if the scripts' output changes over time. But there's a serious side effect of setting this flag in combination with Apache's mod_cache. If the browser makes a conditional request for a cached PHP document, but the document is expired in the cache, mod_cache correctly passes on the conditional request to PHP. If the PHP script responds with a "304 Not Modified" code, mod_cache should generate a 304 response for the browser. But due to no_local_copy, Apache is denied from creating a 304 code in response to a request for a PHP document. This forces it to resend the (still valid) body of the PHP document from the cache with a 200 code. But setting "no_local_copy=1" is not needed anyway.
Just below the r->no_local_copy=1 line in sapi_apache2.c is a series of calls to apr_table_unset which remove any headers that Apache might have generated based on the PHP source itself and could be using to accept conditional requests. Starting at line 468 in php_apache_request_ctor, we have: apr_table_unset(r->headers_out, "Content-Length"); apr_table_unset(r->headers_out, "Last-Modified"); apr_table_unset(r->headers_out, "Expires"); apr_table_unset(r->headers_out, "ETag"); It seems to me that removing the r->no_local_copy=1 will therefore not result in erroneous "304 Not Modified" responses being sent by Apache for PHP scripts. At the moment if you request a mod_cache'd PHP page which itself sends no special caching directives (e.g. an empty script), the reply from the server is (trimmed to include only cache-relevant directives): Status=OK - 200 Date=Sun, 26 Jul 2009 10:07:58 GMT PHP has correctly suppressed the generation of Last-Modified-Date and ETag headers based on the source of the script itself. No conditional request is possible and you won't get a stale page. Now, if you request a PHP document that does set "ETag", such as the attached code "index.php", the response from the server is: Status=OK - 200 Date=Sun, 26 Jul 2009 10:11:02 GMT Etag="ComputedETag" Expires=Tue, 25 Aug 2009 10:11:02 GMT And the error log shows that the script correctly returned a 200 response code to Apache. Now if you press "refresh" in Firefox, the browser sends this request: If-None-Match="ComputedETag" Cache-Control=max-age=0 This is a conditional get which will also result in Apache revalidating its cache (since max-age=0). So Apache passes the conditional request onto PHP, and PHP sends back a 304 Not Modified response. But due to no_local_copy, mod_cache cannot send a 304, it responds to the browser with: Status=OK - 200 Date=Sun, 26 Jul 2009 10:11:35 GMT Etag="ComputedETag" Expires=Tue, 25 Aug 2009 10:11:35 GMT So, I removed the line that sets no_local_copy in my PHP. 
This had no impact on the way that the empty PHP document that sets no cache directives was served, Apache never served erroneous 304 responses because it never saw ETag or Last-Modified headers based on the PHP source. But the ETag-conditional request for index.php by the browser now gives the correct 304 response code: Status=Not Modified - 304 Date=Sun, 26 Jul 2009 10:16:23 GMT Etag="ComputedETag1" Expires=Tue, 25 Aug 2009 10:16:23 GMT Reproduce code: --------------- <?php $etag="\"ComputedETag\""; header("Etag: $etag"); //Expires ages away header("Expires: " . gmdate("D, d M Y H:i:s", time() + 60 * 60 * 24 * 30) . " GMT"); if (isset($_SERVER['HTTP_IF_NONE_MATCH']) && $_SERVER['HTTP_IF_NONE_MATCH'] == $etag) { /* At a users' request, the cache has been bypassed, but the * document is still the same. Avoid costly response generation * and waste of bandwidth by just sending not-modified. * (it is illegal to send a response-body by the HTTP spec). */ header('HTTP/1.0 304 Not Modified'); error_log(date('r')." - Response: 304 Not Modified"); exit(); //Don't generate or send the body } error_log(date('r')." - Response: 200. Generated document."); echo "Document body goes here"; ?> [2010-01-26 15:03 UTC] minfrin at sharp dot fm The httpd mod_cache is designed to work as a self contained cache that bolts onto the front of the server (or with httpd v2.3+, can be inserted anywhere in the httpd filter stack for more targeted caching). In theory, php shouldn't be touching any of the cache fields in request_rec at all, nor should php be trying to obscure any HTTP headers if it detects caching has been enabled.
It should be possible for a php script to support conditional requests, in other words php should be able to detect the If-None-Match header and respond with 200 OK or 304 Not Modified as appropriate. Does anyone know what problem was being solved that made php want to care as to whether mod_cache was present? [2010-01-26 16:56 UTC] rasmus@php.net I have been looking at this code a bit this morning. It does indeed look like the no_local_copy is not needed here since both Last-Modified and ETag that may be present prior to PHP being executed are removed. And to minfrin, I think PHP does need to remove any ETag or Last-Modified headers that are generated prior to PHP execution. It simply makes no sense for Apache to generate an ETag for a request prior to PHP executing on that request. How could that possibly be valid? [2010-01-26 21:49 UTC] n dot sherlock at gmail dot com minfrin, the caching headers ETag and Last-Modified are added by Apache before PHP gets to run, and whether mod_cache is turned on or not. They aren't mod_cache specific. [2021-10-27 09:18 UTC] cmb@php.net I also think that the no_local_copy is wrong, but given the long standing behavior, I'd be uncomfortable to remove it for any stable version.
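The disagreement above is only about which layer (PHP or mod_cache) may answer a conditional request with 304; the decision itself, as the reproduce script shows, is a plain ETag comparison. A language-neutral sketch of that rule follows; the ConditionalGet class and its status helper are invented for illustration, and real servers additionally handle the * wildcard and weak validators.

```java
public class ConditionalGet {

    // Decide the HTTP status for a conditional GET: 304 when the client's
    // If-None-Match value equals the current ETag, 200 otherwise.
    public static int status(String ifNoneMatch, String currentEtag) {
        if (ifNoneMatch != null && ifNoneMatch.equals(currentEtag)) {
            return 304; // Not Modified: the client's cached copy is still valid
        }
        return 200; // OK: regenerate the document and send the body
    }

    public static void main(String[] args) {
        // Revalidation hit: same quoted ETag, no body needs to be sent.
        System.out.println(status("\"ComputedETag\"", "\"ComputedETag\"")); // prints 304
        // First request (no If-None-Match header): full response.
        System.out.println(status(null, "\"ComputedETag\"")); // prints 200
    }
}
```

The bug is precisely that, with no_local_copy set, mod_cache is forbidden from turning this 304 decision into a 304 response and falls back to resending the cached body with a 200.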
http://docs.buildbot.net/current/manual/configuration/buildsets.html | 2.5.11. Build Sets — Buildbot 4.3.0 documentation

2.5.11. Build Sets

A BuildSet represents a set of Build s that all compile and/or test the same version of the source tree. Usually, these builds are created by multiple Builder s and will thus execute different steps. The BuildSet is tracked as a single unit, which fails if any of the component Build s have failed, and therefore can succeed only if all of the component Build s have succeeded. There are two kinds of status notification messages that can be emitted for a BuildSet : the firstFailure type (which fires as soon as we know the BuildSet will fail), and the Finished type (which fires once the BuildSet has completely finished, regardless of whether the overall set passed or failed).
A BuildSet is created with a set of one or more source stamp tuples of (branch, revision, changes, patch), some of which may be None, and a list of Builders on which it is to be run. They are then given to the BuildMaster, which is responsible for creating a separate BuildRequest for each Builder. There are a couple of different likely values for the SourceStamp: (revision=None, changes=CHANGES, patch=None) This is a SourceStamp used when a series of Changes have triggered a build. The VC step will attempt to check out a tree that contains CHANGES (and any changes that occurred before CHANGES, but not any that occurred after them). (revision=None, changes=None, patch=None) This builds the most recent code on the default branch. This is the sort of SourceStamp that would be used on a Build that was triggered by a user request, or by a Periodic scheduler. It is also possible to configure the VC Source Step to always check out the latest sources rather than paying attention to the Changes in the SourceStamp, which will result in the same behavior as this. (branch=BRANCH, revision=None, changes=None, patch=None) This builds the most recent code on the given BRANCH. Again, this is generally triggered by a user request or a Periodic scheduler. (revision=REV, changes=None, patch=(LEVEL, DIFF, SUBDIR_ROOT)) This checks out the tree at the given revision REV, then applies a patch (using patch -pLEVEL < DIFF) from inside the relative directory SUBDIR_ROOT. Item SUBDIR_ROOT is optional and defaults to the builder working directory. The try command creates this kind of SourceStamp. If patch is None, the patching step is bypassed. The buildmaster is responsible for turning the BuildSet into a set of BuildRequest objects and queueing them on the appropriate Builders. © Copyright Buildbot Team Members. Built with Sphinx using a theme provided by Read the Docs. | 2026-01-13T09:30:34
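The four likely (branch, revision, changes, patch) combinations above can be modeled with a few lines of plain Python. This is a hedged sketch, not Buildbot's actual SourceStamp class; the `describe` helper and the field layout are mine, chosen only to mirror the cases the documentation lists, including the patch-bypass rule.

```python
# Hypothetical model of the (branch, revision, changes, patch) tuples
# described above; Buildbot's real SourceStamp class differs.
from collections import namedtuple

SourceStamp = namedtuple("SourceStamp", "branch revision changes patch")

def describe(ss):
    """Classify a stamp the way the documentation above does."""
    if ss.changes is not None:
        return "build tree containing these Changes"
    if ss.patch is not None:
        # Patch present: check out REV, then apply the patch.
        return "check out revision %s, then apply patch" % ss.revision
    if ss.revision is not None:
        return "check out revision %s" % ss.revision
    # No revision, changes, or patch: build the latest on the branch.
    return "most recent code on branch %s" % (ss.branch or "default")

# A 'try'-style stamp: fixed revision plus a (LEVEL, DIFF) patch.
trystamp = SourceStamp(None, "abc123", None, (1, b"--- a/f\n+++ b/f\n"))
# A user-requested build of the latest default-branch code.
latest = SourceStamp(None, None, None, None)
```

Setting `patch=None`, as in `latest`, skips the patching branch entirely, matching the "patching step is bypassed" note above.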
http://docs.buildbot.net/current/manual/configuration/manhole.html | 2.5.22. Manhole — Buildbot 4.3.0 documentation Manhole is an interactive Python shell that gives full access to the Buildbot master instance. It is probably only useful for Buildbot developers. Using Manhole requires the cryptography and pyasn1 Python packages to be installed. These are not part of the normal Buildbot dependencies. There are several implementations of Manhole available, which differ in their authentication mechanisms and the security of the connection. Note: Manhole exposes full access to the buildmaster's account (including the ability to modify and delete files). It is recommended not to expose the manhole to the Internet and to use a strong password. class buildbot.plugins.util.
AuthorizedKeysManhole(port, keyfile, ssh_hostkey_dir) A manhole implementation that accepts encrypted ssh connections and authenticates by ssh keys. The prospective client must have an ssh private key that matches one of the public keys in the manhole's authorized keys file. Parameters: port (string or int) – The port to listen on. This is a strports specification string, like tcp:12345 or tcp:12345:interface=127.0.0.1. Bare integers are treated as a simple tcp port. keyfile (string) – The path to the file containing the public parts of the authorized SSH keys. The path is interpreted relative to the buildmaster's basedir. The file should contain one public SSH key per line, the exact same format as used by sshd in ~/.ssh/authorized_keys. ssh_hostkey_dir (string) – The path to the directory which contains the ssh host keys for this server. class buildbot.plugins.util.PasswordManhole(port, username, password, ssh_hostkey_dir) A manhole implementation that accepts encrypted ssh connections and authenticates by username and password. Parameters: port (string or int) – The port to listen on. This is a strports specification string, like tcp:12345 or tcp:12345:interface=127.0.0.1. Bare integers are treated as a simple tcp port. username (string) – The username to authenticate. password (string) – The password of the user to authenticate. ssh_hostkey_dir (string) – The path to the directory which contains the ssh host keys for this server. class buildbot.plugins.util.TelnetManhole(port, username, password) A manhole implementation that accepts unencrypted telnet connections and authenticates by username and password. Note: This connection method is not secure and should not be used anywhere the port is exposed to the Internet. Parameters: port (string or int) – The port to listen on. This is a strports specification string, like tcp:12345 or tcp:12345:interface=127.0.0.1. Bare integers are treated as a simple tcp port.
username (string) – The username to authenticate. password (string) – The password of the user to authenticate. 2.5.22.1. Using manhole The interactive Python shell can be entered by simply connecting to the host in question. For instance, in the case of the ssh password-based manhole, the configuration may look like this: from buildbot import manhole c['manhole'] = manhole.PasswordManhole("tcp:1234:interface=127.0.0.1", "admin", "passwd", ssh_hostkey_dir="data/ssh_host_keys") The ssh_hostkey_dir above declares a path, relative to the buildmaster's basedir, in which to look for ssh host keys. To create an ssh key, navigate to the buildmaster's basedir and run: mkdir -p data/ssh_host_keys ckeygen3 -t rsa -f "data/ssh_host_keys/ssh_host_rsa_key" Restart Buildbot and then try to connect to the running buildmaster like this: ssh -p1234 admin@127.0.0.1 # enter passwd at prompt After the connection has been established, objects can be explored in more depth using dir(x) or the helper function show(x). For example: >>> master.workers.workers {'example-worker': <Worker 'example-worker', current builders: runtests>} >>> show(master) data attributes of <buildbot.master.BuildMaster instance at 0x7f7a4ab7df38> basedir : '/home/dustin/code/buildbot/t/buildbot/'... botmaster : <type 'instance'> buildCacheSize : None buildHorizon : None buildbotURL : http://localhost:8010/ changeCacheSize : None change_svc : <type 'instance'> configFileName : master.cfg db : <class 'buildbot.db.connector.DBConnector'> db_url : sqlite:///state.sqlite ... >>> show(master.botmaster.builders['win32']) data attributes of <Builder ''builder'' at 48963528> The buildmaster's SSH server will use a different host key than the normal sshd running on a typical unix host. This will cause the ssh client to complain about a host key mismatch, because it does not realize there are two separate servers running on the same host.
To avoid this, use a clause like the following in your .ssh/config file: Host remotehost-buildbot HostName remotehost HostKeyAlias remotehost-buildbot Port 1234 # use 'user' if you use PasswordManhole and your name is not 'admin'. # if you use AuthorizedKeysManhole, this probably doesn't matter. User admin | 2026-01-13T09:30:34
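The key-based AuthorizedKeysManhole documented above differs from the password example only in its arguments. A sketch of the corresponding master.cfg fragment, assuming the authorized-keys file and host-key directory already exist under the basedir (both file names here are my assumptions, not fixed defaults):

```python
# master.cfg fragment (sketch): key-based manhole bound to localhost only.
from buildbot.plugins import util

c = BuildmasterConfig = {}           # standard master.cfg preamble
c['manhole'] = util.AuthorizedKeysManhole(
    "tcp:1234:interface=127.0.0.1",  # strports spec; loopback only
    "manhole_authorized_keys",       # relative to the basedir (assumed name)
    ssh_hostkey_dir="data/ssh_host_keys",
)
```

Binding the strports specification to 127.0.0.1, as in the documentation's own example, keeps the manhole off external interfaces, which matters given the full-account access warned about above.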
https://www.visma.com/voiceofvisma/ep-11-ari-pekka-salovaara | Ep 11: From Founder to Segment Director with Ari-Pekka Salovaara Voice of Visma January 22, 2025 Spotify
YouTube Apple Podcasts Amazon Music About the episode Ari-Pekka is a serial entrepreneur who joined Visma when his company was acquired in 2010. He now leads the small business segment, which currently includes 34 companies and 1.6 million customers. Today, Ari-Pekka joins host Filip Matz to talk about his journey, his leadership style, and where he hopes to see the segment go from here. More from Voice of Visma We're sitting down with leaders and colleagues from around Visma to share their stories, industry knowledge, and valuable career lessons. With the Voice of Visma podcast, we're bringing our people and culture closer to you. Ep 22: Building, learning, and accelerating growth in the SaaS world with Maxin Schneider Entrepreneurial leadership often grows through experience, and Maxin Schneider has seen that up close. Ep 21: How DEI fuels business success with Iveta Bukane Why DEI isn't just a moral imperative: it's a business necessity. Ep 20: Driving tangible sustainability outcomes with Freja Landewall Discover how ESG goes far beyond the environment, encompassing people, governance, and the long-term resilience of business. Ep 19: Future-proofing public services in Sweden with Marie Ceder Between demographic changes, the rise of AI, and digitalisation, the public sector is at a pivotal moment. Ep 18: Making inclusion part of our everyday work with Ida Algotsson What does inclusion truly mean at Visma, not just as values, but as everyday actions? Ep 17: Sustainability at the heart of business with Robin Åkerberg Honouring our responsibility goes well beyond the numbers: it starts with a shared purpose and values.
Ep 16: Innovation for the public good with Kasper Lyhr Serving the public sector goes way beyond software: it's about shaping the future of society as a whole. Ep 15: Leading with transparency and vulnerability with Ellen Sano What does it mean to be a "firestarter" in business? Ep 14: Women, innovation, and the future of Visma with Merete Hverven Our CEO, Merete, knows that great leadership takes more than just hard work: it takes vision. Ep 13: Building partnerships beyond software with Daniel Ognøy Kaspersen What does it look like when an accounting software company delivers more than just great software? Ep 12: AI in the accounting sphere with Joris Joppe Artificial intelligence is changing industries across the board, and accounting is no exception. But in such a highly specialised field, what does change actually look like? Ep 11: From Founder to Segment Director with Ari-Pekka Salovaara Ari-Pekka is a serial entrepreneur who joined Visma when his company was acquired in 2010. He now leads the small business segment. Ep 10: When brave choices can save a company with Charlotte von Sydow What's it like stepping in as the Managing Director for a company in decline? Ep 09: Revolutionising tax tech in Italy with Enrico Mattiazzi and Vito Lomele Take one look at their product, their customer reviews, or their workplace awards, and it's clear why Fiscozen leads Italy's tax tech scene. Ep 08: Navigating the waters of entrepreneurship with Steffen Torp When it comes to being an entrepreneur, the journey is as personal as it is unpredictable. Ep 07: The untold stories of Visma with Øystein Moan What did Visma look like in its early days? Are there any decisions our former CEO would have made differently?
Ep 06: Measure what matters: Employee engagement with Vibeke Müller Research shows that having engaged, happy employees is so important for building a great company culture and performing better financially. Ep 05: Our Team Visma | Lease a Bike sponsorship with Anne-Grethe Thomle Karlsen It's one thing to sponsor the world's best cycling team; it's a whole other thing to provide software and expertise that helps them do what they do best. Ep 04: "How do you make people care about security?" with Joakim Tauren With over 700 applications across the Visma Group (and counting!), cybersecurity is make-or-break for us. Ep 03: The human side of enterprise with Yvette Hoogewerf As a software company, our products are central to our business… but that's only one part of the equation. Ep 02: From Management Trainee to CFO with Stian Grindheim How does someone work their way up from Management Trainee to CFO by the age of 30? And balance fatherhood alongside it all? Ep 01: An optimistic look at the future of AI with Jacob Nyman We're all too familiar with the fears surrounding artificial intelligence. So today, Jacob and Johan are flipping the script. (Trailer) Introducing: Voice of Visma These are the stories that shape us... and the reason Visma is unlike anywhere else.
Visma Software International AS, Karenslyst allé 56, 0277 Oslo, Norway. visma@visma.com ©️ 2026 Visma | 2026-01-13T09:30:34
https://aws.amazon.com/pt/waf/ | Web Application Firewall – Web API Protection – AWS WAF Get ten million common bot control requests per month with the AWS Free Tier → AWS WAF Protect your web applications from common exploits Get started with AWS WAF Benefits of AWS WAF Save time with managed rules Save time with managed rules so you can spend more time building applications. Monitor, block, or rate-limit bots Monitor, block, or rate-limit common and pervasive bots. Reduce security configuration steps Streamline complex security configuration with a consolidated interface that reduces the complexity and steps of security deployment by up to 80%. Centralized, practical visibility A single, comprehensive interface combines essential security functions with specialized partner protections to enhance security visibility and controls. This unified approach turns security data into actionable insights, eliminating operational friction and accelerating risk response. Strengthen your security posture Preconfigured protection packs draw on AWS security expertise to provide instant protection templates for specific industries and workload types, such as APIs, PHP applications, and web services. These templates are continuously optimized to ensure up-to-date security without requiring deep deployment expertise. Receive ongoing security recommendations to strengthen your overall security posture. Why use AWS WAF?
With AWS WAF, you can create security rules that control bot traffic and block common attack patterns, such as SQL injection or cross-site scripting (XSS). Use cases Filter web traffic Create rules to filter web requests based on conditions including IP addresses, HTTP headers and body, or custom URIs. Prevent account takeover fraud Monitor your application's login page for unauthorized access to user accounts using compromised credentials. Automatic layer 7 DDoS protection Designed to continuously monitor and automatically mitigate application-layer (layer 7) distributed denial of service (DDoS) events within seconds. Rapid security deployment Deploy new applications with confidence using simplified, guided configuration, with a single-page interface to enable preconfigured security defaults tailored to your needs. Strengthen your security posture Through expert-built rule packs, consolidated visibility, and ongoing recommendations, you get immediate protection to optimize your security posture. Get started with AWS WAF
© 2026, Amazon Web Services, Inc. or its affiliates. All rights reserved. | 2026-01-13T09:30:34
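The rules described above are expressed as JSON documents in the WAFv2 API. A hedged sketch of a rate-based rule's shape, written as a Python dict; the field names follow the public WAFv2 rule schema as I understand it, and the rule name, limit, and metric name are illustrative values to verify against the current API reference before use.

```python
# Sketch of a WAFv2 rate-based rule as a Python dict. The structure
# mirrors the documented rule JSON; treat the exact field names as
# assumptions to check against the AWS WAF API reference.
rate_limit_rule = {
    "Name": "limit-per-ip",           # illustrative rule name
    "Priority": 1,                    # lower numbers are evaluated first
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,            # request threshold per evaluation window
            "AggregateKeyType": "IP", # aggregate request counts by source IP
        }
    },
    "Action": {"Block": {}},          # block once the limit is exceeded
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "limitPerIp",
    },
}
```

A list of such rule dicts is what a web ACL carries; the priority ordering decides which rule's action applies first when several match.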
http://anh.cs.luc.edu/handsonPythonTutorial/ifstatements.html#strange-function-exercise | 3.1. If Statements — Hands-on Python Tutorial for Python 3 3.1.1. Simple Conditions The statements introduced in this chapter will involve tests or conditions. More syntax for conditions will be introduced later, but for now consider simple arithmetic comparisons that directly translate from math into Python. Try each line separately in the Shell: 2 < 5 3 > 7 x = 11 x > 10 2 * x < x type(True) You see that conditions are either True or False. These are the only possible Boolean values (named after 19th century mathematician George Boole). In Python the name Boolean is shortened to the type bool. It is the type of the results of true-false conditions or tests. Note: The Boolean values True and False have no quotes around them! Just as '123' is a string and 123 without the quotes is not, 'True' is a string, not of type bool. 3.1.2. Simple if Statements Run this example program, suitcase.py. Try it at least twice, with inputs 30 and then 55. As you can see, you get an extra result, depending on the input. The main code is: weight = float(input("How many pounds does your suitcase weigh? ")) if weight > 50: print("There is a $25 charge for luggage that heavy.") print("Thank you for your business.") The middle two lines are an if statement. It reads pretty much like English. If it is true that the weight is greater than 50, then print the statement about an extra charge. If it is not true that the weight is greater than 50, then don't do the indented part: skip printing the extra luggage charge. In any event, when you have finished with the if statement (whether it actually does anything or not), go on to the next statement that is not indented under the if. In this case that is the statement printing "Thank you".
The general Python syntax for a simple if statement is: if condition: indentedStatementBlock If the condition is true, then do the indented statements. If the condition is not true, then skip the indented statements. Another fragment as an example: if balance < 0: transfer = -balance # transfer enough from the backup account: backupAccount = backupAccount - transfer balance = balance + transfer As with other kinds of statements with a heading and an indented block, the block can have more than one statement. The assumption in the example above is that if an account goes negative, it is brought back to 0 by transferring money from a backup account in several steps. In the examples above the choice is between doing something (if the condition is True) or nothing (if the condition is False). Often there is a choice of two possibilities, only one of which will be done, depending on the truth of a condition. 3.1.3. if-else Statements Run the example program, clothes.py. Try it at least twice, with inputs 50 and then 80. As you can see, you get different results, depending on the input. The main code of clothes.py is: temperature = float(input('What is the temperature? ')) if temperature > 70: print('Wear shorts.') else: print('Wear long pants.') print('Get some exercise outside.') The middle four lines are an if-else statement. Again it is close to English, though you might say "otherwise" instead of "else" (but else is shorter!). There are two indented blocks: one, like in the simple if statement, comes right after the if heading and is executed when the condition in the if heading is true. In the if-else form this is followed by an else: line, followed by another indented block that is only executed when the original condition is false. In an if-else statement exactly one of two possible indented blocks is executed. A line is also shown dedented next, removing indentation, about getting exercise.
Since it is dedented, it is not a part of the if-else statement: since its amount of indentation matches the if heading, it is always executed in the normal forward flow of statements, after the if-else statement (whichever block is selected). The general Python if-else syntax is: if condition: indentedStatementBlockForTrueCondition else: indentedStatementBlockForFalseCondition These statement blocks can have any number of statements, and can include about any kind of statement. See the Graduate Exercise. 3.1.4. More Conditional Expressions All the usual arithmetic comparisons may be made, but many do not use standard mathematical symbolism, mostly for lack of proper keys on a standard keyboard. Less than: math symbol <, Python symbol <. Greater than: math symbol >, Python symbol >. Less than or equal: math symbol ≤, Python symbols <=. Greater than or equal: math symbol ≥, Python symbols >=. Equals: math symbol =, Python symbols ==. Not equal: math symbol ≠, Python symbols !=. There should not be space between the two-symbol Python substitutes. Notice that the obvious choice for equals, a single equal sign, is not used to check for equality. An annoying second equal sign is required. This is because the single equal sign is already used for assignment in Python, so it is not available for tests. Warning: It is a common error to use only one equal sign when you mean to test for equality, and not make an assignment! Tests for equality do not make an assignment, and they do not require a variable on the left. Any expressions can be tested for equality or inequality (!=). They do not need to be numbers! Predict the results and try each line in the Shell: x = 5 x x == 5 x == 6 x x != 6 x = 6 6 == x 6 != x 'hi' == 'h' + 'i' 'HI' != 'hi' [1, 2] != [2, 1] An equality check does not make an assignment. Strings are case sensitive. Order matters in a list. Try in the Shell: 'a' > 5 When the comparison does not make sense, an Exception is caused.
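The comparison operators just listed all evaluate to bool values, and that can be checked directly. A small sketch restating the Shell examples above as assertions, plus the float-precision caveat discussed elsewhere in the tutorial:

```python
# Comparisons evaluate to bool values; == tests, it does not assign.
x = 5
assert (x == 5) is True
assert (x != 6) is True
assert 'hi' == 'h' + 'i'     # any expressions can be compared
assert 'HI' != 'hi'          # strings are case sensitive
assert [1, 2] != [2, 1]      # order matters in a list

# Float arithmetic is inexact, so == on floats can surprise you:
assert (0.1 + 0.2 == 0.3) is False
assert abs((0.1 + 0.2) - 0.3) < 1e-9   # compare with a tolerance instead
```

The tolerance comparison in the last line is the usual workaround when two float computations should be treated as equal.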
[1] Following up on the discussion of the inexactness of float arithmetic in String Formats for Float Precision, confirm that Python does not consider .1 + .2 to be equal to .3: write a simple condition into the Shell to test. Here is another example: Pay with Overtime. Given a person's work hours for the week and regular hourly wage, calculate the total pay for the week, taking into account overtime. Hours worked over 40 are overtime, paid at 1.5 times the normal rate. This is a natural place for a function enclosing the calculation. Read the setup for the function: def calcWeeklyWages(totalHours, hourlyWage): '''Return the total weekly wages for a worker working totalHours, with a given regular hourlyWage. Include overtime for hours over 40.''' The problem clearly indicates two cases: when no more than 40 hours are worked, or when more than 40 hours are worked. In case more than 40 hours are worked, it is convenient to introduce a variable overtimeHours. You are encouraged to think about a solution before going on and examining mine. You can try running my complete example program, wages.py, also shown below. The format operation at the end of the main function uses the floating point format (String Formats for Float Precision) to show two decimal places for the cents in the answer: def calcWeeklyWages(totalHours, hourlyWage): '''Return the total weekly wages for a worker working totalHours, with a given regular hourlyWage. Include overtime for hours over 40.''' if totalHours <= 40: totalWages = hourlyWage * totalHours else: overtime = totalHours - 40 totalWages = hourlyWage * 40 + (1.5 * hourlyWage) * overtime return totalWages def main(): hours = float(input('Enter hours worked: ')) wage = float(input('Enter dollars paid per hour: ')) total = calcWeeklyWages(hours, wage) print('Wages for {hours} hours at ${wage:.2f} per hour are ${total:.2f}.'.
format(**locals())) main() Here the input was intended to be numeric, but it could be decimal, so the conversion from string was via float, not int. Below is an equivalent alternative version of the body of calcWeeklyWages, used in wages1.py. It uses just one general calculation formula and sets the parameters for the formula in the if statement. There are generally a number of ways you might solve the same problem! if totalHours <= 40: regularHours = totalHours overtime = 0 else: overtime = totalHours - 40 regularHours = 40 return hourlyWage * regularHours + (1.5 * hourlyWage) * overtime The in Boolean operator: There are also Boolean operators that are applied to types other than numbers. A useful Boolean operator is in, checking membership in a sequence: >>> vals = ['this', 'is', 'it'] >>> 'is' in vals True >>> 'was' in vals False It can also be used with not, as not in, to mean the opposite: >>> vals = ['this', 'is', 'it'] >>> 'is' not in vals False >>> 'was' not in vals True In general the two versions are: item in sequence item not in sequence Detecting the need for if statements: Like with planning programs needing for statements, you want to be able to translate English descriptions of problems that would naturally include if or if-else statements. What are some words or phrases or ideas that suggest the use of these statements? Think of your own and then compare to a few I gave: [2] 3.1.4.1. Graduate Exercise Write a program, graduate.py, that prompts students for how many credits they have. Print whether or not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.) 3.1.4.2. Head or Tails Exercise Write a program headstails.py. It should include a function flip() that simulates a single flip of a coin: it randomly prints either Heads or Tails.
Accomplish this by choosing 0 or 1 arbitrarily with random.randrange(2), and use an if-else statement to print Heads when the result is 0, and Tails otherwise. In your main program have a simple repeat loop that calls flip() 10 times to test it, so you generate a random sequence of 10 Heads and Tails. 3.1.4.3. Strange Function Exercise Save the example program jumpFuncStub.py as jumpFunc.py, and complete the definitions of functions jump and main as described in the function documentation strings in the program. In the jump function definition use an if-else statement (hint [3]). In the main function definition use a for-each loop, the range function, and the jump function. The jump function is introduced for use in the Strange Sequence Exercise, and others after that. 3.1.5. Multiple Tests and if-elif Statements Often you want to distinguish between more than two distinct cases, but conditions only have two possible results, True or False, so the only direct choice is between two options. As anyone who has played "20 Questions" knows, you can distinguish more cases by further questions. If there are more than two choices, a single test may only reduce the possibilities, but further tests can reduce the possibilities further and further. Since most any kind of statement can be placed in an indented statement block, one choice is a further if statement. For instance, consider a function to convert a numerical grade to a letter grade, 'A', 'B', 'C', 'D' or 'F', where the cutoffs for 'A', 'B', 'C', and 'D' are 90, 80, 70, and 60 respectively.
One way to write the function would be to test for one grade at a time, and resolve all the remaining possibilities inside the next else clause: def letterGrade(score): if score >= 90: letter = 'A' else: # grade must be B, C, D or F if score >= 80: letter = 'B' else: # grade must be C, D or F if score >= 70: letter = 'C' else: # grade must be D or F if score >= 60: letter = 'D' else: letter = 'F' return letter This repeatedly increasing indentation, with an if statement as the else block, can be annoying and distracting. A preferred alternative in this situation, which avoids all this indentation, is to combine each else and if block into an elif block: def letterGrade(score): if score >= 90: letter = 'A' elif score >= 80: letter = 'B' elif score >= 70: letter = 'C' elif score >= 60: letter = 'D' else: letter = 'F' return letter The most elaborate syntax for an if-elif-else statement is indicated in general below: if condition1: indentedStatementBlockForTrueCondition1 elif condition2: indentedStatementBlockForFirstTrueCondition2 elif condition3: indentedStatementBlockForFirstTrueCondition3 elif condition4: indentedStatementBlockForFirstTrueCondition4 else: indentedStatementBlockForEachConditionFalse The if, each elif, and the final else lines are all aligned. There can be any number of elif lines, each followed by an indented block. (Three happen to be illustrated above.) With this construction exactly one of the indented blocks is executed: the one corresponding to the first True condition, or, if all conditions are False, the block after the final else line. Be careful of the strange Python contraction. It is elif, not elseif. A program testing the letterGrade function is in example program grade1.py. See the Grade Exercise. A final alternative for if statements: if-elif-... with no else. This would mean changing the syntax for if-elif-else above so the final else: and the block after it would be omitted.
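The elif version of letterGrade above can be exercised at the boundary values that the Grade Exercise warns about. A quick sketch, restating the function from the text with a few checks (the specific test scores are my choices):

```python
def letterGrade(score):
    """Cutoffs for 'A', 'B', 'C', 'D' are 90, 80, 70, and 60."""
    if score >= 90:
        letter = 'A'
    elif score >= 80:
        letter = 'B'
    elif score >= 70:
        letter = 'C'
    elif score >= 60:
        letter = 'D'
    else:
        letter = 'F'
    return letter

# Exactly one branch runs: the first true condition wins.
print(letterGrade(95))    # A
print(letterGrade(80))    # B: a boundary score belongs to the higher grade
print(letterGrade(79.6))  # C: just below the cut-off
print(letterGrade(20))    # F
```

Because the conditions are checked top to bottom, a score of exactly 80 stops at the `score >= 80` test and never reaches the later branches, which is what "exactly one of the indented blocks is executed" means in practice.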
It is similar to the basic if statement without an else, in that it is possible for no indented block to be executed. This happens if none of the conditions in the tests are true. With an else included, exactly one of the indented blocks is executed. Without an else, at most one of the indented blocks is executed.

```python
if weight > 120:
    print('Sorry, we can not take a suitcase that heavy.')
elif weight > 50:
    print('There is a $25 charge for luggage that heavy.')
```

This if-elif statement only prints a line if there is a problem with the weight of the suitcase.

3.1.5.1. Sign Exercise

Write a program sign.py to ask the user for a number. Print out which category the number is in: 'positive', 'negative', or 'zero'.

3.1.5.2. Grade Exercise

In Idle, load grade1.py and save it as grade2.py. Modify grade2.py so it has an equivalent version of the letterGrade function that tests in the opposite order, first for F, then D, C, .... Hint: How many tests do you need to do? [4] Be sure to run your new version and test with different inputs that exercise all the different paths through the program. Be careful to test around the cut-off points. What does a grade of 79.6 imply? What about exactly 80?

3.1.5.3. Wages Exercise

* Modify the wages.py or the wages1.py example to create a program wages2.py that assumes people are paid double time for hours over 60. Hence they get paid for at most 20 hours of overtime at 1.5 times the normal rate. For example, a person working 65 hours with a regular wage of $10 per hour would work at $10 per hour for 40 hours, at 1.5 * $10 for 20 hours of overtime, and 2 * $10 for 5 hours of double time, for a total of 10*40 + 1.5*10*20 + 2*10*5 = $800. You may find wages1.py easier to adapt than wages.py. Be sure to test all paths through the program! Your program is likely to be a modification of a program where some choices worked before, but once you change things, retest all the cases! Changes can mess up things that worked before.

3.1.6.
Nesting Control-Flow Statements

The power of a language like Python comes largely from the variety of ways basic statements can be combined. In particular, for and if statements can be nested inside each other's indented blocks. For example, suppose you want to print only the positive numbers from an arbitrary list of numbers in a function with the following heading. Read the pieces for now.

```python
def printAllPositive(numberList):
    '''Print only the positive numbers in numberList.'''
```

For example, suppose numberList is [3, -5, 2, -1, 0, 7]. You want to process a list, so that suggests a for-each loop,

```python
for num in numberList:
```

but a for-each loop runs the same code body for each element of the list, and we only want

```python
print(num)
```

for some of them. That seems like a major obstacle, but look closer at what needs to happen concretely. As a human, who has eyes of amazing capacity, you are drawn immediately to the actual correct numbers, 3, 2, and 7, but clearly a computer doing this systematically will have to check every number. In fact, there is a consistent action required: every number must be tested to see if it should be printed. This suggests an if statement, with the condition num > 0. Try loading into Idle and running the example program onlyPositive.py, whose code is shown below. It ends with a line testing the function:

```python
def printAllPositive(numberList):
    '''Print only the positive numbers in numberList.'''
    for num in numberList:
        if num > 0:
            print(num)

printAllPositive([3, -5, 2, -1, 0, 7])
```

This idea of nesting if statements enormously expands the possibilities with loops. Now different things can be done at different times in loops, as long as there is a consistent test to allow a choice between the alternatives. Shortly, while loops will also be introduced, and you will see if statements nested inside of them, too.

The rest of this section deals with graphical examples. Run example program bounce1.py.
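The same pattern, an if nested inside a for-each loop, also supports accumulating a result rather than printing. A hedged sketch (countPositive is an invented name, not one of the tutorial's example files):

```python
def countPositive(numberList):
    '''Return how many numbers in numberList are positive.'''
    count = 0
    for num in numberList:
        if num > 0:          # the same test as in printAllPositive
            count = count + 1
    return count

print(countPositive([3, -5, 2, -1, 0, 7]))  # 3
```

Back to the bounce1.py example: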
It has a red ball moving and bouncing obliquely off the edges. If you watch several times, you should see that it starts from random locations. Also, you can repeat the program from the Shell prompt after you have run the script. For instance, right after running the program, try in the Shell:

```python
bounceBall(-3, 1)
```

The parameters give the amount the shape moves in each animation step. You can try other values in the Shell, preferably with magnitudes less than 10.

For the remainder of the description of this example, read the extracted text pieces. The animations before this were totally scripted, saying exactly how many moves to make in which direction, but in this case the direction of motion changes with every bounce. The program has a graphics object shape, and the central animation step is

```python
shape.move(dx, dy)
```

but in this case dx and dy have to change when the ball gets to a boundary. For instance, imagine the ball getting to the left side as it is moving to the left and up. The bounce obviously alters the horizontal part of the motion, in fact reversing it, but the ball would still continue up. The reversal of the horizontal part of the motion means that the horizontal shift changes direction and therefore its sign:

```python
dx = -dx
```

but dy does not need to change. This switch does not happen at each animation step, but only when the ball reaches the edge of the window. It happens only some of the time, suggesting an if statement. Still, the condition must be determined. Suppose the center of the ball has coordinates (x, y). When x reaches some particular x coordinate, call it xLow, the ball should bounce. The edge of the window is at coordinate 0, but xLow should not be 0, or the ball would be halfway off the screen before bouncing! For the edge of the ball to hit the edge of the screen, the x coordinate of the center must be a radius away, so actually xLow is the radius of the ball. Animation goes quickly in small steps, so I cheat.
I allow the ball to take one (small, quick) step past where it really should go (xLow), and then we reverse it so it comes back to where it belongs. In particular:

```python
if x < xLow:
    dx = -dx
```

There are similar bounding variables xHigh, yLow and yHigh, all a radius away from the actual edge coordinates, and similar conditions to test for a bounce off each possible edge. Note that whichever edge is hit, one coordinate, either dx or dy, reverses. One way the collection of tests could be written is

```python
if x < xLow:
    dx = -dx
if x > xHigh:
    dx = -dx
if y < yLow:
    dy = -dy
if y > yHigh:
    dy = -dy
```

This approach would cause some extra testing: if it is true that x < xLow, then it is impossible for x > xHigh to be true as well, so we do not need both tests together. We avoid the unnecessary tests with an elif clause (for both x and y):

```python
if x < xLow:
    dx = -dx
elif x > xHigh:
    dx = -dx
if y < yLow:
    dy = -dy
elif y > yHigh:
    dy = -dy
```

Note that the middle if is not changed to an elif, because it is possible for the ball to reach a corner, and need both dx and dy reversed. The program also uses several methods to read part of the state of graphics objects that we have not used in examples yet. Various graphics objects, like the circle we are using as the shape, know their center point, which can be accessed with the getCenter() method. (Actually a clone of the point is returned.) Also, each coordinate of a Point can be accessed with the getX() and getY() methods. This explains the new features in the central function defined for bouncing around in a box, bounceInBox. The animation arbitrarily goes on in a simple repeat loop for 600 steps. (A later example will improve this behavior.)

```python
def bounceInBox(shape, dx, dy, xLow, xHigh, yLow, yHigh):
    ''' Animate a shape moving in jumps (dx, dy), bouncing when
    its center reaches the low and high x and y coordinates. '''
    delay = .005
    for i in range(600):
        shape.move(dx, dy)
        center = shape.getCenter()
        x = center.getX()
        y = center.getY()
        if x < xLow:
            dx = -dx
        elif x > xHigh:
            dx = -dx
        if y < yLow:
            dy = -dy
        elif y > yHigh:
            dy = -dy
        time.sleep(delay)
```

The program starts the ball from an arbitrary point inside the allowable rectangular bounds. This is encapsulated in a utility function included in the program, getRandomPoint. The getRandomPoint function uses the randrange function from the module random. Note that in the parameters for both the functions range and randrange, the end stated is past the last value actually desired:

```python
def getRandomPoint(xLow, xHigh, yLow, yHigh):
    '''Return a random Point with coordinates in the range specified.'''
    x = random.randrange(xLow, xHigh + 1)
    y = random.randrange(yLow, yHigh + 1)
    return Point(x, y)
```

The full program is listed below, repeating bounceInBox and getRandomPoint for completeness. Several parts that may be useful later, or are easiest to follow as a unit, are separated out as functions. Make sure you see how it all hangs together or ask questions!

```python
''' Show a ball bouncing off the sides of the window. '''

from graphics import *
import time, random

def bounceInBox(shape, dx, dy, xLow, xHigh, yLow, yHigh):
    ''' Animate a shape moving in jumps (dx, dy), bouncing when
    its center reaches the low and high x and y coordinates. '''
    delay = .005
    for i in range(600):
        shape.move(dx, dy)
        center = shape.getCenter()
        x = center.getX()
        y = center.getY()
        if x < xLow:
            dx = -dx
        elif x > xHigh:
            dx = -dx
        if y < yLow:
            dy = -dy
        elif y > yHigh:
            dy = -dy
        time.sleep(delay)

def getRandomPoint(xLow, xHigh, yLow, yHigh):
    '''Return a random Point with coordinates in the range specified.'''
    x = random.randrange(xLow, xHigh + 1)
    y = random.randrange(yLow, yHigh + 1)
    return Point(x, y)

def makeDisk(center, radius, win):
    '''Return a red disk that is drawn in win with given center and radius.'''
    disk = Circle(center, radius)
    disk.setOutline("red")
    disk.setFill("red")
    disk.draw(win)
    return disk

def bounceBall(dx, dy):
    '''Make a ball bounce around the screen, initially moving by (dx, dy)
    at each jump.'''
    win = GraphWin('Ball Bounce', 290, 290)
    win.yUp()

    radius = 10
    xLow = radius  # center is separated from the wall by the radius at a bounce
    xHigh = win.getWidth() - radius
    yLow = radius
    yHigh = win.getHeight() - radius

    center = getRandomPoint(xLow, xHigh, yLow, yHigh)
    ball = makeDisk(center, radius, win)

    bounceInBox(ball, dx, dy, xLow, xHigh, yLow, yHigh)
    win.close()

bounceBall(3, 5)
```

3.1.6.1. Short String Exercise

Write a program short.py with a function printShort with heading:

```python
def printShort(strings):
    '''Given a list of strings, print the ones
    with at most three characters.
    >>> printShort(['a', 'long', 'one'])
    a
    one
    '''
```

In your main program, test the function, calling it several times with different lists of strings. Hint: Find the length of each string with the len function.

The function documentation here models a common approach: illustrating the behavior of the function with a Python Shell interaction. This part begins with a line starting with >>>. Other exercises and examples will also document behavior in the Shell.

3.1.6.2. Even Print Exercise

Write a program even1.py with a function printEven with heading:

```python
def printEven(nums):
    '''Given a list of integers nums, print the even ones.
    >>> printEven([4, 1, 3, 2, 7])
    4
    2
    '''
```

In your main program, test the function, calling it several times with different lists of integers. Hint: A number is even if its remainder, when dividing by 2, is 0.

3.1.6.3. Even List Exercise

Write a program even2.py with a function chooseEven with heading:

```python
def chooseEven(nums):
    '''Given a list of integers, nums, return a list
    containing only the even ones.
    >>> chooseEven([4, 1, 3, 2, 7])
    [4, 2]
    '''
```

In your main program, test the function, calling it several times with different lists of integers and printing the results in the main program. (The documentation string illustrates the function call in the Python Shell, where the return value is automatically printed. Remember that in a program, you only print what you explicitly say to print.) Hint: In the function, create a new list, and append the appropriate numbers to it, before returning the result.

3.1.6.4. Unique List Exercise

* The madlib2.py program has its getKeys function, which first generates a list of each occurrence of a cue in the story format. This gives the cues in order, but likely includes repetitions. The original version of getKeys uses a quick method to remove duplicates: forming a set from the list. There is a disadvantage in the conversion, though: sets are not ordered, so when you iterate through the resulting set, the order of the cues will likely bear no resemblance to the order they first appeared in the list. That issue motivates this problem:

Copy madlib2.py to madlib2a.py, and add a function with this heading:

```python
def uniqueList(aList):
    ''' Return a new list that includes the first occurrence of each
    value in aList, and omits later repeats.  The returned list should
    include the first occurrences of values in aList in their
    original order.
    >>> vals = ['cat', 'dog', 'cat', 'bug', 'dog', 'ant', 'dog', 'bug']
    >>> uniqueList(vals)
    ['cat', 'dog', 'bug', 'ant']
    '''
```

Hint: Process aList in order. Use the in syntax to only append elements to a new list that are not already in the new list. After perfecting the uniqueList function, replace the last line of getKeys, so it uses uniqueList to remove duplicates in keyList. Check that your madlib2a.py prompts you for cue values in the order that the cues first appear in the madlib format string.

3.1.7.
Compound Boolean Expressions

To be eligible to graduate from Loyola University Chicago, you must have 120 credits and a GPA of at least 2.0. This translates directly into Python as a compound condition:

```python
credits >= 120 and GPA >= 2.0
```

This is true if both credits >= 120 is true and GPA >= 2.0 is true. A short example program using this would be:

```python
credits = float(input('How many units of credit do you have? '))
GPA = float(input('What is your GPA? '))
if credits >= 120 and GPA >= 2.0:
    print('You are eligible to graduate!')
else:
    print('You are not eligible to graduate.')
```

The new Python syntax is for the operator and:

condition1 and condition2

The compound condition is true if both of the component conditions are true. It is false if at least one of the conditions is false. See the Congress Exercise.

In the last example in the previous section, there was an if-elif statement where both tests had the same block to be done if the condition was true:

```python
if x < xLow:
    dx = -dx
elif x > xHigh:
    dx = -dx
```

There is a simpler way to state this in a sentence: if x < xLow or x > xHigh, switch the sign of dx. That translates directly into Python:

```python
if x < xLow or x > xHigh:
    dx = -dx
```

The word or makes another compound condition:

condition1 or condition2

is true if at least one of the conditions is true. It is false if both conditions are false. This corresponds to one way the word "or" is used in English. Other times in English, "or" is used to mean that exactly one alternative is true.

Warning: When translating a problem stated in English using "or", be careful to determine whether the meaning matches Python's or.

It is often convenient to encapsulate complicated tests inside a function. Think how to complete the function starting:

```python
def isInside(point, rect):
    '''Return True if the point is inside the Rectangle rect.'''
    pt1 = rect.getP1()
    pt2 = rect.getP2()
```

Recall that a Rectangle is specified in its constructor by two diagonally opposite Points.
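Before completing isInside, the behavior of and and or can be checked directly in a small hedged sketch; the eligibleToGraduate helper and the sample numbers are invented for illustration, based on the graduation condition above:

```python
def eligibleToGraduate(credits, GPA):
    '''True only when both graduation requirements hold.'''
    return credits >= 120 and GPA >= 2.0

print(eligibleToGraduate(130, 2.5))  # True: both conditions hold
print(eligibleToGraduate(130, 1.8))  # False: and requires both
print(130 >= 120 or 1.8 >= 2.0)      # True: or requires only one
```

Now back to isInside and its two corner points: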
This example gives the first use in the tutorials of the Rectangle methods that recover those two corner points, getP1 and getP2. The program calls the points obtained this way pt1 and pt2. The x and y coordinates of pt1, pt2, and point can be recovered with the methods of the Point type, getX() and getY().

Suppose that I introduce variables for the x coordinates of pt1, point, and pt2, calling these x-coordinates end1, val, and end2, respectively. On a first try you might decide that the needed mathematical relationship to test is

end1 <= val <= end2

Unfortunately, this is not enough: the only requirement for the two corner points is that they be diagonally opposite, not that the coordinates of the second point are higher than the corresponding coordinates of the first point. It could be that end1 is 200, end2 is 100, and val is 120. In this latter case val is between end1 and end2, but substituting into the expression above,

200 <= 120 <= 100

is False. The 100 and 200 need to be reversed in this case. This makes a complicated situation. Also, this is an issue which must be revisited for both the x and y coordinates. I introduce an auxiliary function isBetween to deal with one coordinate at a time. It starts:

```python
def isBetween(val, end1, end2):
    '''Return True if val is between the ends.
    The ends do not need to be in increasing order.'''
```

Clearly this is true if the original expression, end1 <= val <= end2, is true. You must also consider the possible case when the order of the ends is reversed: end2 <= val <= end1. How do we combine these two possibilities? The Boolean connectives to consider are and and or. Which applies? You only need one to be true, so or is the proper connective. A correct but redundant function body would be:

```python
if end1 <= val <= end2 or end2 <= val <= end1:
    return True
else:
    return False
```

Check the meaning: if the compound expression is True, return True.
If the condition is False, return False; in either case, return the same value as the test condition. See that a much simpler and neater version is to just return the value of the condition itself!

```python
return end1 <= val <= end2 or end2 <= val <= end1
```

Note: In general you should not need an if-else statement to choose between true and false values! Operate directly on the boolean expression.

A side comment on expressions like

end1 <= val <= end2

Other than the two-character operators, this is like standard math syntax, chaining comparisons. In Python any number of comparisons can be chained in this way, closely approximating mathematical notation. Though this is good Python, be aware that if you try other high-level languages like Java and C++, such an expression is gibberish. Another way the expression can be expressed (and which translates directly to other languages) is:

end1 <= val and val <= end2

So much for the auxiliary function isBetween. Back to the isInside function. You can use the isBetween function to check the x coordinates,

```python
isBetween(point.getX(), pt1.getX(), pt2.getX())
```

and to check the y coordinates,

```python
isBetween(point.getY(), pt1.getY(), pt2.getY())
```

Again the question arises: how do you combine the two tests? In this case we need the point to be both between the sides and between the top and bottom, so the proper connector is and. Think how to finish the isInside method. Hint: [5]

Sometimes you want to test the opposite of a condition. As in English, you can use the word not. For instance, to test if a Point was not inside Rectangle rect, you could use the condition

```python
not isInside(point, rect)
```

In general, not condition is True when condition is False, and False when condition is True.

The example program chooseButton1.py, shown below, is a complete program using the isInside function in a simple application, choosing colors. Pardon the length. Do check it out.
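Before the full listing, the isBetween idiom and not can be exercised on their own. A hedged sketch using the isBetween body derived above (the sample values, including the 200/100/120 case from the discussion, are for illustration):

```python
def isBetween(val, end1, end2):
    '''Return True if val is between the ends, in either order.'''
    return end1 <= val <= end2 or end2 <= val <= end1

print(isBetween(120, 200, 100))      # True: ends given in decreasing order
print(isBetween(120, 100, 200))      # True
print(not isBetween(300, 100, 200))  # True: 300 is outside the ends
```

On to chooseButton1.py itself: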
It will be the starting point for a number of improvements that shorten it and make it more powerful in the next section. First a brief overview: the program includes the functions isBetween and isInside that have already been discussed. The program creates a number of colored rectangles to use as buttons and also as picture components. Aside from specific data values, the code to create each rectangle is the same, so the action is encapsulated in a function, makeColoredRect. All of this is fine, and will be preserved in later versions. The present main function is long, though. It has the usual graphics starting code, draws buttons and picture elements, and then has a number of code sections prompting the user to choose a color for a picture element. Each code section has a long if-elif-else test to see which button was clicked, and sets the color of the picture element appropriately.

```python
'''Make a choice of colors via mouse clicks in Rectangles --
A demonstration of Boolean operators and Boolean functions.'''

from graphics import *

def isBetween(x, end1, end2):
    '''Return True if x is between the ends or equal to either.
    The ends do not need to be in increasing order.'''
    return end1 <= x <= end2 or end2 <= x <= end1

def isInside(point, rect):
    '''Return True if the point is inside the Rectangle rect.'''
    pt1 = rect.getP1()
    pt2 = rect.getP2()
    return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
           isBetween(point.getY(), pt1.getY(), pt2.getY())

def makeColoredRect(corner, width, height, color, win):
    ''' Return a Rectangle drawn in win with the upper left corner
    and color specified.'''
    corner2 = corner.clone()
    corner2.move(width, -height)
    rect = Rectangle(corner, corner2)
    rect.setFill(color)
    rect.draw(win)
    return rect

def main():
    win = GraphWin('pick Colors', 400, 400)
    win.yUp()  # right side up coordinates

    redButton = makeColoredRect(Point(310, 350), 80, 30, 'red', win)
    yellowButton = makeColoredRect(Point(310, 310), 80, 30, 'yellow', win)
    blueButton = makeColoredRect(Point(310, 270), 80, 30, 'blue', win)

    house = makeColoredRect(Point(60, 200), 180, 150, 'gray', win)
    door = makeColoredRect(Point(90, 150), 40, 100, 'white', win)
    roof = Polygon(Point(50, 200), Point(250, 200), Point(150, 300))
    roof.setFill('black')
    roof.draw(win)

    msg = Text(Point(win.getWidth()/2, 375), 'Click to choose a house color.')
    msg.draw(win)
    pt = win.getMouse()

    if isInside(pt, redButton):
        color = 'red'
    elif isInside(pt, yellowButton):
        color = 'yellow'
    elif isInside(pt, blueButton):
        color = 'blue'
    else:
        color = 'white'
    house.setFill(color)

    msg.setText('Click to choose a door color.')
    pt = win.getMouse()

    if isInside(pt, redButton):
        color = 'red'
    elif isInside(pt, yellowButton):
        color = 'yellow'
    elif isInside(pt, blueButton):
        color = 'blue'
    else:
        color = 'white'
    door.setFill(color)

    win.promptClose(msg)

main()
```

The only further new feature used is in the long return statement in isInside:

```python
return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
       isBetween(point.getY(), pt1.getY(), pt2.getY())
```

Recall that Python is smart enough to realize that a statement continues to the next line if there is an unmatched pair of parentheses or brackets. Above is another situation with a long statement, but there are no unmatched parentheses on a line. For readability it is best not to make an enormously long line that would run off your screen or paper. Continuing to the next line is recommended. You can make the final character on a line be a backslash ('\') to indicate that the statement continues on the next line. This is not particularly neat, but it is a rather rare situation.
Most statements fit neatly on one line, and the creator of Python decided it was best to make the syntax simple in the most common situation. (Many other languages require a special statement terminator symbol like ';' and pay no attention to newlines.) Extra parentheses here would not hurt, so an alternative would be:

```python
return (isBetween(point.getX(), pt1.getX(), pt2.getX()) and
        isBetween(point.getY(), pt1.getY(), pt2.getY()))
```

The chooseButton1.py program is long partly because of repeated code. The next section gives another version involving lists.

3.1.7.1. Congress Exercise

A person is eligible to be a US Senator who is at least 30 years old and has been a US citizen for at least 9 years. Write an initial version of a program congress.py to obtain age and length of citizenship from the user and print out whether the person is eligible to be a Senator or not. A person is eligible to be a US Representative who is at least 25 years old and has been a US citizen for at least 7 years. Elaborate your program congress.py so it obtains age and length of citizenship and prints out just the one of the following three statements that is accurate:

You are eligible for both the House and Senate.
You are eligible only for the House.
You are ineligible for Congress.

3.1.8. More String Methods

Here are a few more string methods useful in the next exercises, assuming the methods are applied to a string s:

s.startswith(pre) returns True if string s starts with string pre: both '-123'.startswith('-') and 'downstairs'.startswith('down') are True, but '1 - 2 - 3'.startswith('-') is False.

s.endswith(suffix) returns True if string s ends with string suffix: both 'whoever'.endswith('ever') and 'downstairs'.endswith('airs') are True, but '1 - 2 - 3'.endswith('-') is False.

s.replace(sub, replacement, count) returns a new string with up to the first count occurrences of string sub replaced by replacement.
The replacement can be the empty string to delete sub. For example:

```python
s = '-123'
t = s.replace('-', '', 1)       # t equals '123'
t = t.replace('-', '', 1)       # t is still equal to '123'
u = '.2.3.4.'
v = u.replace('.', '', 2)       # v equals '23.4.'
w = u.replace('.', ' dot ', 5)  # w equals ' dot 2 dot 3 dot 4 dot '
```

3.1.8.1. Article Start Exercise

In library alphabetizing, if the initial word is an article ("The", "A", "An"), then it is ignored when ordering entries. Write a program completing this function, and then testing it:

```python
def startsWithArticle(title):
    '''Return True if the first word of title is "The", "A" or "An".'''
```

Be careful: if the title starts with "There", it does not start with an article. What should you be testing for?

3.1.8.2. Is Number String Exercise

** In the later Safe Number Input Exercise, it will be important to know if a string can be converted to the desired type of number. Explore that here. Save example isNumberStringStub.py as isNumberString.py and complete it. It contains headings and documentation strings for the functions in both parts of this exercise.

A legal whole number string consists entirely of digits. Luckily, strings have an isdigit method, which is true when a nonempty string consists entirely of digits, so '2397'.isdigit() returns True, and '23a'.isdigit() returns False, exactly corresponding to the situations when the string represents a whole number!

In both parts be sure to test carefully. Not only confirm that all appropriate strings return True; also be sure to test that you return False for all sorts of bad strings. Recognizing an integer string is more involved, since it can start with a minus sign (or not). Hence the isdigit method is not enough by itself. This part is the most straightforward if you have worked on the sections String Indices and String Slices. An alternate approach works if you use the count method from Object Orientation, and some methods from this section.
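The whole-number part described above reduces to the isdigit test. A hedged sketch (the name isWholeNumberStr is illustrative; the stub file may use a different heading):

```python
def isWholeNumberStr(s):
    '''Return True if s is a legal whole number string:
    nonempty and consisting entirely of digits.'''
    return s.isdigit()

print(isWholeNumberStr('2397'))  # True
print(isWholeNumberStr('23a'))   # False
print(isWholeNumberStr(''))      # False: empty string fails isdigit
print(isWholeNumberStr('-12'))   # False: a minus sign is not a digit
```

As the last line shows, isdigit alone rejects a leading minus sign, which is why recognizing a full integer string takes extra work.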
Complete the function isIntStr. Complete the function isDecimalStr, which introduces the possibility of a decimal point (though a decimal point is not required). The string methods mentioned in the previous part remain useful.

[1] This is an improvement that is new in Python 3.

[2] "In this case do ___; otherwise", "if ___, then", "when ___ is true, then", "___ depends on whether",

[3] If you divide an even number by 2, what is the remainder? Use this idea in your if condition.

[4] 4 tests to distinguish the 5 cases, as in the previous version.

[5] Once again, you are calculating and returning a Boolean result. You do not need an if-else statement.

Hands-on Python Tutorial, 3. More On Flow of Control. © Copyright 2019, Dr. Andrew N. Harrington. Last updated on Jan 05, 2020. Created using Sphinx 1.3.1+.
https://penneo.com/da/product-updates/ | Product Updates - Penneo Produkter Penneo Sign Validator Hvorfor Penneo Integrationer Løsninger Anvendelsesscenarier Digital signering Dokumenthåndtering Udfyld og underskriv PDF-formularer Automatisering af underskriftsprocesser Overholdelse af eIDAS Brancher Revision og regnskab Finans og bank Advokatydelser Ejendom Administration og HR Priser Ressourcer Vidensunivers Trust Center Produktopdateringer SIGN Hjælpecenter KYC Hjælpecenter Systemstatus LOG PÅ Penneo Sign Log ind på Penneo Sign. LOG PÅ Penneo KYC Log ind på Penneo KYC. LOG PÅ BOOK ET MØDE GRATIS PRØVEPERIODE DA EN NO FR NL Produkter Penneo Sign Validator Hvorfor Penneo Integrationer Løsninger Revision og regnskab Finans og bank Advokatydelser Ejendom Administration og HR Anvendelsesscenarier Digital signering Dokumenthåndtering Udfyld og underskriv PDF-formularer Automatisering af underskriftsprocesser Overholdelse af eIDAS Priser Ressourcer Vidensunivers Trust Center Produktopdateringer SIGN Hjælpecenter KYC Hjælpecenter Systemstatus BOOK ET MØDE GRATIS PRØVEPERIODE LOG PÅ DA EN NO FR NL Penneo Sign Log ind på Penneo Sign. LOG PÅ Penneo KYC Log ind på Penneo KYC. LOG PÅ Produktopdateringer Produktopdateringer fra Q4 2025: Hurtigere afsendelse, global QES & API-opdateringer december 3, 2025 I sidste kvartal af 2025 fokuserede vi på to hovedmål: at fjerne friktion fra dine daglige arbejdsgange og styrke de... Læs mere Penneo QES: Højeste juridiske gyldighed og dokumenteret sikkerhed december 3, 2025 Penneo tilbyder nu kvalificerede elektroniske signaturer gennem både ID Verifier og itsme® QES. Guldstandarden for digitale signaturer En kvalificeret elektronisk... Læs mere Send dokumenter til underskrift på få sekunder med Penneo Sign december 3, 2025 En hurtigere og enklere måde at sende dokumenter til underskrift er landet. Vi introducerer en ny og strømlinet mulighed for... Læs mere Hvad er nyt i Penneo Sign? 
Produktopdateringer fra Q2 2025 september 1, 2025 I andet kvartal 2025 har vi fokuseret på vigtige forbedringer for at øge stabiliteten af vores platform, styrke vores engagement... Læs mere Vi introducerer Penneos nye analysepanel: Forbrugsoversigt september 1, 2025 Vi lancerer Forbrugsoversigt – et helt nyt værktøj, som samler jeres vigtigste data ét sted. Med Forbrugsoversigt har du fået... Læs mere Q1 2025 Penneo Sign Opdateringer marts 31, 2025 Første kvartal af 2025 bød på en række forbedringer i Penneo Sign – med fokus på bedre ydeevne, brugeroplevelse og... Læs mere Har du brug for at indsamle underskrifter fra internationale interessenter? Penneo Sign gør det nu nemmere marts 5, 2025 Arbejder du med internationale bestyrelsesmedlemmer, kunder eller medarbejdere, som ikke kan underskrive med lokale eID-metoder som MitID eller MitID Erhverv?... Læs mere Hvad er nyt i Penneo Sign? Produktopdateringer fra H2 2024 december 18, 2024 Hold dig opdateret med de nyeste forbedringer i Penneo Sign! I anden halvdel af 2024 har vi introduceret nye funktioner... Læs mere Hvad er nyt i Penneo Sign? Produktopdateringer fra H1 2024 juni 26, 2024 Hold dig opdateret med de nyeste forbedringer i Penneo Sign! I de første måneder af 2024 har vi introduceret nye... Læs mere SMS-verifikation i Penneo Sign: Et ekstra lag af sikkerhed i dokumentunderskrift april 20, 2024 I vores fortsatte bestræbelser på at forbedre sikkerheden og effektiviteten af dine digitale underskrifter introducerer vi SMS-verification. Læs mere SE FLERE ARTIKLER Hold dig opdateret Er du allerede Penneo bruger? Tilmeld dig vores nyhedsbrev for at få produktnyheder direkte i din indbakke, og være blandt de første til at høre om produktopdateringer, der kan lette dine arbejdsgange! URL Dette felt er til validering og bør ikke ændres. Email * * Ja tak, jeg vil gerne modtage produktnyheder og tips fra Penneo via email, og jeg er opmærksom på at jeg kan afmelde mig dette nyhedsbrev når som helst. 
PENNEO A/S - Gærtorvet 1-5, DK-1799 København V - CVR: 35633766 | 2026-01-13T09:30:34
https://support.microsoft.com/lt-lt/windows/valdykite-slapukus-microsoft-edge-per%C5%BEi%C5%ABra-leidimas-blokavimas-naikinimas-ir-naudojimas-168dab11-0753-043d-7c16-ede5947fc64d | Manage cookies in Microsoft Edge: view, allow, block, delete, and use - Microsoft Support
Applies to: Windows 10, Windows 11, Microsoft Edge. Cookies are small pieces of data stored on your device by the websites you visit. They serve various purposes, such as remembering sign-in credentials and site preferences, and tracking user behavior. However, you may want to delete cookies for privacy reasons or to fix browsing problems. This article provides instructions on how to: View all cookies Allow all cookies Allow cookies from a specific site Block third-party cookies Block all cookies Block cookies on a specific site Delete all cookies Delete cookies from a specific site Delete cookies every time you close the browser Use cookies to preload pages for faster browsing View all cookies Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services.
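To make the "small pieces of data" concrete, here is a minimal sketch using only Python's standard library. The cookie names and values are invented for illustration; the attribute syntax (`Path`, `Max-Age`, `HttpOnly`) is the standard Set-Cookie format that browsers such as Edge store.

```python
# Illustrative sketch: parsing Set-Cookie-style strings with Python's stdlib.
# The names/values below are made-up examples, not real site data.
from http.cookies import SimpleCookie

jar = SimpleCookie()
# A session identifier a site might store after sign-in:
jar.load('session_id=abc123; Path=/; HttpOnly')
# A persistent preference with an explicit lifetime (one year, in seconds):
jar.load('theme=dark; Max-Age=31536000; Path=/')

print(jar["session_id"].value)   # the stored value itself -> abc123
print(jar["theme"]["max-age"])   # attribute controlling lifetime -> 31536000
```

Each entry is just a name-value pair plus a few attributes, which is why a browser can list, allow, or delete them individually as described in the steps above.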
Select Cookies, then click See all cookies and site data to view all stored cookies and related site information. Allow all cookies Allowing cookies lets websites save and read data in your browser, which can improve your browsing experience by remembering your preferences and sign-in information. Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the Allow sites to save and read cookie data (recommended) toggle to allow all cookies. Allow cookies from a specific site Allowing cookies lets websites save and read data in your browser, which can improve your browsing experience by remembering your preferences and sign-in information. Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and go to Allowed to save cookies. Select Add site to allow cookies on a per-site basis by entering the site's URL. Block third-party cookies If you don't want third-party websites to store cookies on your computer, you can block them. However, blocking cookies may cause some pages to display incorrectly, or a site may tell you that you need to allow cookies to view it. Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the Block third-party cookies toggle. Block all cookies If you don't want websites to store cookies on your computer, you can block all cookies.
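A per-site allow list like the one described above can be thought of as matching the visited host against a list of site rules. The sketch below is a hypothetical simplification (it assumes a rule such as "example.com" also covers its subdomains), not Edge's actual matching logic.

```python
# Hypothetical sketch of per-site cookie allow-list matching.
# Assumption for illustration: a rule covers the exact host and its subdomains.
from urllib.parse import urlsplit

def is_allowed(url: str, allow_list: list[str]) -> bool:
    """Return True if the URL's host matches any allow-list rule."""
    host = urlsplit(url).hostname or ""
    for rule in allow_list:
        if host == rule or host.endswith("." + rule):
            return True
    return False

allowed = ["example.com"]
print(is_allowed("https://shop.example.com/cart", allowed))  # True (subdomain)
print(is_allowed("https://example.org/", allowed))           # False
```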
However, blocking cookies may cause some pages to display incorrectly, or a site may tell you that you need to allow cookies to view it. Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn off Allow sites to save and read cookie data (recommended) to block all cookies. Block cookies on a specific site Microsoft Edge lets you block cookies on a specific site, but this may prevent some pages from displaying correctly, or a site may tell you that cookies must be allowed to view it. To block cookies on a specific site: Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and go to Not allowed to save and read cookies. Select Add site to block cookies on a per-site basis by entering the site's URL. Delete all cookies Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Clear browsing data, then select Choose what to clear next to Clear browsing data now. Under Time range, pick a time range from the list. Select Cookies and other site data, then select Clear now. Note: Alternatively, you can delete cookies by pressing CTRL + SHIFT + DELETE and following steps 4 and 5. All your cookies and other site data for the selected time range will be deleted. This will sign you out of most websites.
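The first- vs third-party distinction used above boils down to whether the resource setting a cookie comes from the same site as the page in the address bar. The sketch below illustrates that idea with a deliberately simplified host comparison (real browsers compare registrable domains, not raw hostnames); all URLs are invented examples.

```python
# Simplified sketch of the first- vs third-party cookie distinction.
# Real browsers compare registrable domains; this compares raw hostnames.
from urllib.parse import urlsplit

def is_third_party(page_url: str, resource_url: str) -> bool:
    """A resource is 'third-party' if its host differs from the page's host."""
    page_host = urlsplit(page_url).hostname
    resource_host = urlsplit(resource_url).hostname
    return page_host != resource_host

# An ad pixel embedded from another domain is third-party:
print(is_third_party("https://news.example/story", "https://ads.tracker.net/pixel"))   # True
# The page's own stylesheet is first-party:
print(is_third_party("https://news.example/story", "https://news.example/style.css"))  # False
```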
Delete cookies from a specific site Open the Edge browser and select Settings and more > Settings > Privacy, search, and services. Select Cookies, then click See all cookies and site data and search for the site whose cookies you want to delete. Select the down arrow to the right of the site whose cookies you want to delete, then select Delete. The selected site's cookies are now deleted. Repeat this step for any site whose cookies you want to delete. Delete cookies every time you close the browser Open the Edge browser and select Settings and more > Settings > Privacy, search, and services. Select Clear browsing data, then select Choose what to clear every time you close the browser. Turn on the Cookies and other site data toggle. With this feature on, all cookies and other site data are deleted every time you close the Edge browser. This will sign you out of most websites. Use cookies to preload pages for faster browsing Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the Preload pages for faster browsing and searching toggle.
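Clearing cookies at browser close relates to a built-in distinction in the cookie format itself: a cookie with no `Expires`/`Max-Age` attribute is a "session" cookie that only lives while the browser runs, while one with a lifetime persists across restarts. A minimal stdlib sketch (values invented):

```python
# Sketch: session vs persistent cookies, using Python's stdlib cookie parser.
# Cookie names/values are made-up examples.
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar.load('sid=tmp42')                     # no lifetime attribute -> session cookie
jar.load('consent=yes; Max-Age=2592000')  # 30-day lifetime -> persistent cookie

def is_session_cookie(morsel) -> bool:
    """True when neither Max-Age nor Expires is set (attributes default to '')."""
    return not morsel["max-age"] and not morsel["expires"]

print(is_session_cookie(jar["sid"]))      # True
print(is_session_cookie(jar["consent"]))  # False
```

Enabling "clear cookies on close" effectively treats every cookie, persistent or not, like a session cookie, which is why it signs you out of most websites.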
© Microsoft 2026 | 2026-01-13T09:30:34
https://hackmd.io/education?utm_source=blog&utm_medium=nav-bar | The HackMD Blog: Education Read about the ways you can harness Markdown to level up your workflow, productivity, teamwork, and more. # en # education How AI is Shaping Collaborative Markdown Editors in 2025 Discover why HackMD stands out as the best collaborative markdown editor for creating, sharing, and scaling knowledge. Aug 27, 2025 By Chaseton Collins # en # education Fostering Community Growth: How Prosocial thrives on HackMD Explore how combining the ACT Behavior Matrix and Prosocial with HackMD, a collaborative online markdown editor, can strengthen your community’s psychological flexibility, enhance teamwork, and foster sustainable growth. Jul 23, 2025 By Chaseton Collins # en # education Keeping AI teams in sync: Collaboration tools that work Explore what tools AI professionals and technical teams are using to stay in sync, and how your company can integrate these tools to collaborate better and stay up to date on the latest technology. Jul 2, 2025 By Chaseton Collins # en # education How Markdown heats up AI Explore how Markdown's structured simplicity powers seamless communication with AI tools, optimizing productivity and accuracy. Jun 18, 2025 By Chaseton Collins # en # education Spring clean your workspace with Folders Staying organized can be difficult, but spring is the perfect time to clean up your messy workspace. With HackMD's Folder feature you can quickly create folders, organize notes, and share them with others. Apr 23, 2025 By Chaseton Collins # en # use-case # education Plan and achieve New Year's resolutions with HackMD in 2025 Discover how to transform your New Year's resolutions into actionable plans for 2025. Learn to collaborate, organize, and share ideas seamlessly with your community using HackMD's powerful features.
Jan 3, 2025 By Chaseton Collins © 2026 HackMD. All Rights Reserved. | 2026-01-13T09:30:34
https://support.microsoft.com/lt-lt/microsoft-edge/-microsoft-edge-nar%C5%A1ymo-duomenys-ir-privatumas-bb8174ba-9d73-dcf2-9b4a-c582b4e640dd | Microsoft Edge, browsing data, and privacy - Microsoft Support Applies to: Privacy, Microsoft Edge, Windows 10, Windows 11. Microsoft Edge helps you browse, search, shop online, and more.
Like all modern browsers, Microsoft Edge lets you collect and store specific data on your device, such as cookies, and send us information, such as browsing history, so you can have the richest, fastest, and most personalized experience possible. Whenever we collect data, we want to make sure it's the right choice for you. Some people worry that their web browsing history is being collected. That's why we tell you what data is stored on your device or collected by us, and we let you control what data is collected. For more information about privacy in Microsoft Edge, we recommend reviewing our privacy statement. What data is collected and stored, and why Microsoft uses diagnostic data to improve its products and services. We use this data to better understand how our products perform and what needs improvement. Microsoft Edge collects a set of required diagnostic data to keep Microsoft Edge secure, up to date, and working as intended. Microsoft follows the principle of collecting only as much information as necessary. We strive to collect only the information we need and to keep it only as long as it is needed to provide a service or for analysis. You can also control whether optional diagnostic data associated with your device is shared with Microsoft to resolve product issues and improve Microsoft products and services. When you use features and services in Microsoft Edge, diagnostic data about how you use those features is sent to Microsoft. Microsoft Edge saves your browsing history – information about the websites you visit – on your device.
Depending on your settings, this browsing history is sent to Microsoft, which helps us find and fix problems and improve our products and services for all users. You can control the collection of optional diagnostic data in the browser by selecting Settings and more > Settings > Privacy, search, and services > Privacy and turning Send optional diagnostic data to improve Microsoft products on or off. This includes data about trying out new features. Restart Microsoft Edge for changes to this setting to take effect. With this setting on, optional diagnostic data is also shared with Microsoft from other apps that use Microsoft Edge, such as a video streaming app that leases the Microsoft Edge web platform to stream video. The Microsoft Edge web platform will send Microsoft information about how you use the web platform and the websites you visit in the app. This data collection is governed by the optional diagnostic data setting in Microsoft Edge's privacy, search, and services settings. On Windows 10, these settings are defined by the Windows diagnostic data setting. To change the diagnostic data setting, select Start > Settings > Privacy > Diagnostics & feedback. As of March 6, 2024, Microsoft Edge diagnostic data is collected separately from Windows diagnostic data on Windows 10 (22H2 and later) and Windows 11 (23H2 and later) devices in the European Economic Area. On those Windows versions and on all other platforms, you can change the settings in Microsoft Edge by selecting Settings and more > Settings > Privacy, search, and services. In some cases, your organization may manage your diagnostic data settings.
When you search for something, Microsoft Edge can offer suggestions for what you're looking for. To turn this feature on, select Settings and more > Settings > Privacy, search, and services > Search and connected experiences > Address bar and search > Search suggestions and filters, then turn on Show search and site suggestions using my typed characters. As you start typing, the information you enter in the address bar is sent to your default search provider so you get instant search and site suggestions. When you use InPrivate browsing or guest mode, Microsoft Edge collects some information about how you use the browser, depending on your Windows diagnostic data setting or Microsoft Edge privacy settings, but automatic suggestions are turned off and information about the websites you visit is not collected. When you close all InPrivate windows, Microsoft Edge deletes your browsing history, cookies, and site data, as well as passwords, addresses, and form data. You can start a new InPrivate session by selecting Settings and more on a computer, or Tabs on a mobile device. Microsoft Edge also has features to help keep you and your content safe online. Windows Defender SmartScreen automatically blocks websites and downloads that are reported as malicious. Windows Defender SmartScreen checks the address of the web page you're visiting against a list of web page addresses stored on your device that Microsoft considers legitimate. Addresses not on the device's list, and the addresses of files you download, are sent to Microsoft and checked against a frequently updated list of web pages and downloads that have been reported to Microsoft as unsafe or suspicious. Microsoft Edge can help you get through tedious tasks, such as filling out forms and entering passwords, faster by saving that information.
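The SmartScreen flow just described is a two-stage lookup: consult a locally stored list first, and only send unknown addresses for a remote reputation check. The sketch below captures that control flow only; the hostnames and the `KNOWN_GOOD` set are invented stand-ins, not the real list or protocol.

```python
# Simplified sketch of a local-list-first reputation check, as described above.
# KNOWN_GOOD and all hostnames are invented examples, not SmartScreen's data.
from urllib.parse import urlsplit

KNOWN_GOOD = {"example.com", "docs.example.com"}  # stand-in for the local list

def needs_remote_check(url: str) -> bool:
    """True when the host is not on the local list and must be looked up remotely."""
    host = urlsplit(url).hostname or ""
    return host not in KNOWN_GOOD

print(needs_remote_check("https://example.com/"))       # False: found locally
print(needs_remote_check("https://unknown-site.test/")) # True: ask the service
```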
If you choose to use these features, Microsoft Edge stores the information on your device. If you have sync turned on for form fill, such as addresses or passwords, that information is sent to the Microsoft cloud and stored with your Microsoft account so it can be synced across all versions of Microsoft Edge you're signed in to. You can manage this data under Settings and more > Settings > Profiles > Sync. To integrate your browsing experience with other activities on your device, Microsoft Edge shares your browsing history with Microsoft Windows through an indexer. This information is stored on the device. It includes the URL, the category in which the URL may be relevant, such as "most visited", "recently visited", or "recently closed", and the relative frequency or recurrence in each category. Sites you visit in InPrivate mode are not shared. This information is then available to other apps on your device, such as the Start menu or the taskbar. You can control this feature by selecting Settings and more > Settings > Profiles and turning Share browsing data with other Windows features on or off. If turned off, any previously shared data is deleted. To protect video and music content from being copied, some streaming sites store digital rights management (DRM) data on your device, including a unique identifier (ID) and media licenses. When you use one of these sites, it reads the DRM information to make sure you have permission to use the content. Microsoft Edge also stores cookies – small files saved to your device as you browse the web.
Many websites use cookies to store information about your preferences and settings; for example, they save the items in your shopping cart so you don't have to add them again on every visit. Some websites also use cookies to collect information about your online activity in order to show you interest-based ads. Microsoft Edge has options that let you clear cookies and prevent websites from saving cookies in the future. Microsoft Edge sends Do Not Track requests to websites when the Send Do Not Track requests setting is turned on. This setting is available under Settings and more > Settings > Privacy, search, and services > Privacy > Send Do Not Track requests. However, websites may still track your activity even after a Do Not Track request has been sent. How to clear data collected or stored by Microsoft Edge To clear information stored on your device, such as saved passwords or cookies, follow these steps: In Microsoft Edge, select Settings and more > Settings > Privacy, search, and services > Clear browsing data. Select Choose what to clear next to Clear browsing data now. Under Time range, pick a time range. Select the check box next to each type of data you want to clear, then select Clear now. If you want, you can select Choose what to clear every time you close the browser and pick which types of data should be cleared. Learn more about what is deleted with each browsing history item. If you want to clear the browsing history collected by Microsoft, follow the steps below. To view the browsing history associated with your account, sign in to your account at account.microsoft.com. You can also clear browsing data collected by Microsoft using the Microsoft privacy dashboard.
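On the wire, the Do Not Track setting simply adds a `DNT: 1` header to outgoing requests. The sketch below prepares such a request with Python's standard library to show the header; no network request is actually sent, and the URL is just a placeholder.

```python
# Sketch: what "Send Do Not Track requests" adds to an HTTP request.
# We only build and inspect the request object; nothing is sent.
import urllib.request

req = urllib.request.Request("https://example.com/", headers={"DNT": "1"})

# urllib normalizes header names with str.capitalize(), so "DNT" is stored as "Dnt".
print(req.get_header("Dnt"))  # prints: 1
```

As the article notes, this is only a request: whether a site honors the header is entirely up to the site.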
To delete browsing history and other diagnostic data associated with your Windows 10 device, select Start > Settings > Privacy > Diagnostics & feedback, then select Delete under Delete diagnostic data. To clear browsing history shared with other Microsoft features on the local device: In Microsoft Edge, select Settings and more > Settings > Profiles. Select Share browsing data with other Windows features. Turn the setting off. How to manage Microsoft Edge privacy settings To view and customize your privacy settings, select Settings and more > Settings > Privacy, search, and services > Privacy. To learn more about privacy in Microsoft Edge, see the Microsoft Edge privacy whitepaper.
© Microsoft 2026 | 2026-01-13T09:30:34
https://support.microsoft.com/uk-ua/windows/%D0%BA%D0%B5%D1%80%D1%83%D0%B2%D0%B0%D0%BD%D0%BD%D1%8F-%D1%84%D0%B0%D0%B9%D0%BB%D0%B0%D0%BC%D0%B8-cookie-%D0%B2-microsoft-edge-%D0%BF%D0%B5%D1%80%D0%B5%D0%B3%D0%BB%D1%8F%D0%B4-%D0%B4%D0%BE%D0%B7%D0%B2%D1%96%D0%BB-%D0%B1%D0%BB%D0%BE%D0%BA%D1%83%D0%B2%D0%B0%D0%BD%D0%BD%D1%8F-%D0%B2%D0%B8%D0%B4%D0%B0%D0%BB%D0%B5%D0%BD%D0%BD%D1%8F-%D1%82%D0%B0-%D0%B2%D0%B8%D0%BA%D0%BE%D1%80%D0%B8%D1%81%D1%82%D0%B0%D0%BD%D0%BD%D1%8F-168dab11-0753-043d-7c16-ede5947fc64d | Manage cookies in Microsoft Edge: view, allow, block, delete, and use - Microsoft Support
Microsoft Global Microsoft 365 Teams Copilot Windows Xbox Підтримка Програмне забезпечення Програмне забезпечення Програми для Windows Штучний інтелект OneDrive Outlook Перехід зі Skype на Teams OneNote Microsoft Teams Комп'ютери та пристрої Комп'ютери та пристрої Комп’ютерні аксесуари Розваги Розваги Комп’ютерні ігри Бізнес Бізнес Захисний комплекс Microsoft Azure Dynamics 365 Microsoft 365 для бізнесу Microsoft для промисловості Microsoft Power Platform Windows 365 Розробка й ІТ Розробка й ІТ Розробник Microsoft Microsoft Learn Підтримка ШІ-програм на Marketplace Технічна спільнота Microsoft Microsoft Marketplace Visual Studio Marketplace Rewards Інші Інші Безкоштовні завантаження та безпека Освіта Переглянути карту сайту Знайти Пошук довідки Не знайдено результатів Скасувати Вхід Вхід за допомогою облікового запису Microsoft Увійдіть або створіть обліковий запис. Вітаємо, Виберіть інший обліковий запис. У вас є кілька облікових записів Виберіть обліковий запис, за допомогою якого потрібно ввійти. 
Пов’язані теми Захист, безпека та конфіденційність у Windows Огляд Огляд захисту, безпеки та конфіденційності Безпека у Windows Помічник з Безпеки у Windows Захистіть себе за допомогою Безпеки у Windows Перш ніж утилізувати, продати чи подарувати консоль Xbox або ПК з Windows Видалення зловмисних програм із ПК з Windows Безпека у Windows Довідка з Безпеки у Windows Перегляд і видалення журналу браузера в Microsoft Edge Видалення файлів cookie та керування ними Безпечне вилучення цінного вмісту під час повторної інсталяції Windows Пошук і блокування загубленого пристрою Windows Конфіденційність у Windows Помічник із конфіденційності у Windows Параметри конфіденційності Windows, які використовують програми Перегляд даних на інформаційній панелі конфіденційності Керування файлами cookie в Microsoft Edge: перегляд, дозвіл, блокування, видалення та використання Застосовується до Windows 10 Windows 11 Microsoft Edge Файли cookie – це невеликі фрагменти даних, які зберігаються на вашому пристрої веб-сайтами, які ви відвідуєте. Вони служать для різних цілей, таких як запам'ятовування облікових даних для входу, параметрів сайту та відстеження поведінки користувача. Однак може знадобитися видалити файли cookie з міркувань конфіденційності або вирішити проблеми з переглядом веб-сторінок. У цій статті наведено інструкції з: Переглянути всі файли cookie Дозволити всі файли cookie Дозволити файли cookie з певного веб-сайту Блокування сторонніх файлів cookie Блокувати всі файли cookie Блокування файлів cookie з певного сайту Видалити всі файли cookie Видалення файлів cookie з певного сайту Видалення файлів cookie щоразу під час закривання браузера Використання файлів cookie для попереднього завантаження сторінки для швидшого перегляду Переглянути всі файли cookie Відкрийте браузер Edge, виберіть настройки та інше у верхньому правому куті вікна браузера. Виберіть Настройки > Конфіденційність, пошук і служби . 
Виберіть файли cookie , а потім – Переглянути всі файли cookie та дані сайту , щоб переглянути всі збережені файли cookie та пов'язані відомості про сайт. Дозволити всі файли cookie Дозволяючи файли cookie, веб-сайти зможуть зберігати та отримувати дані у вашому браузері, що може покращити перегляд веб-сторінок, запам'ятовуючи ваші вподобання та відомості для входу. Відкрийте браузер Edge, виберіть настройки та інше у верхньому правому куті вікна браузера. Виберіть Настройки > конфіденційність, пошук і служби . Виберіть Файли cookie та ввімкніть перемикач Дозволити сайтам зберігати та читати дані файлів cookie (рекомендовано), щоб дозволити всі файли cookie. Дозволити файли cookie з певного сайту Дозволяючи файли cookie, веб-сайти зможуть зберігати та отримувати дані у вашому браузері, що може покращити перегляд веб-сторінок, запам'ятовуючи ваші вподобання та відомості для входу. Відкрийте браузер Edge, виберіть настройки та інше у верхньому правому куті вікна браузера. Виберіть Настройки > Конфіденційність, пошук і служби . Виберіть Файли cookie та перейдіть до розділу Дозволено зберігати файли cookie. Виберіть додати сайт , щоб дозволити використання файлів cookie на основі кожного сайту, ввівши URL-адресу сайту. Блокування сторонніх файлів cookie Якщо ви не хочете, щоб сторонні сайти зберігав файли cookie на вашому ПК, ви можете заблокувати файли cookie. Але через блокування файлів cookie деякі сторінки можуть відображатися неправильно або можуть з’являтися повідомлення про те, що для перегляду веб-сайту необхідно дозволити зберігання файлів cookie. Відкрийте браузер Edge, виберіть настройки та інше у верхньому правому куті вікна браузера. Виберіть Настройки > конфіденційність, пошук і служби . Виберіть Файли cookie та ввімкніть перемикач Блокувати сторонні файли cookie. Блокувати всі файли cookie Якщо ви не хочете, щоб сторонні сайти зберігав файли cookie на вашому ПК, ви можете заблокувати файли cookie. 
Але через блокування файлів cookie деякі сторінки можуть відображатися неправильно або можуть з’являтися повідомлення про те, що для перегляду веб-сайту необхідно дозволити зберігання файлів cookie. Відкрийте браузер Edge, виберіть настройки та інше у верхньому правому куті вікна браузера. Виберіть Настройки > Конфіденційність, пошук і служби . Виберіть файли cookie та вимкніть параметр Дозволити сайтам зберігати й читати дані файлів cookie (рекомендовано), щоб блокувати всі файли cookie. Блокування файлів cookie з певного сайту Microsoft Edge дає змогу блокувати файли cookie з певного сайту. Однак це може завадити правильному відображенню деяких сторінок або отримати повідомлення від сайту про те, що потрібно дозволити файли cookie для перегляду цього сайту. Щоб заблокувати файли cookie з певного сайту, виконайте наведені нижче дії. Відкрийте браузер Edge, виберіть настройки та інше у верхньому правому куті вікна браузера. Виберіть Настройки > Конфіденційність, пошук і служби . Виберіть файли cookie та перейдіть до розділу Заборонено зберігати та читати файли cookie . Виберіть додати сайт , щоб заблокувати файли cookie на основі кожного сайту, ввівши URL-адресу сайту. Видалити всі файли cookie Відкрийте браузер Edge, виберіть настройки та інше у верхньому правому куті вікна браузера. Виберіть Настройки > конфіденційність, пошук і служби . Виберіть очистити дані браузера , а потім виберіть елемент Вибрати елементи, які потрібно очистити поруч із пунктом Очистити дані браузера зараз . У розділі Проміжок часу виберіть діапазон часу зі списку. Виберіть пункт Файли cookie та інші дані сайтів , а потім – Очистити зараз . Примітка.: Крім того, файли cookie можна видалити, одночасно натиснувши клавіші Ctrl + Shift + Delete , а потім виконавши кроки 4 та 5. Усі файли cookie та інші дані сайту буде видалено для вибраного проміжку часу. Це підписує вас із більшості сайтів. 
Видалення файлів cookie з певного сайту Відкрийте браузер Edge, виберіть настройки та інше > Настройки > Конфіденційність, пошук і служби . Виберіть файли cookie , а потім – Переглянути всі файли cookie та дані сайту та знайдіть сайт, файли cookie якого потрібно видалити. Клацніть стрілку вниз праворуч від сайту, файли cookie якого потрібно видалити, і виберіть видалити . Файли cookie для вибраного сайту тепер видаляються. Повторіть цей крок для всіх сайтів, файли cookie яких потрібно видалити. Видалення файлів cookie щоразу під час закривання браузера Відкрийте браузер Edge, виберіть Настройки та інше > Настройки > Конфіденційність, пошук і служби . Виберіть очистити дані перегляду , а потім виберіть елемент Вибрати елементи, які потрібно видаляти щоразу, коли ви закриваєте браузер . Увімкніть перемикач Файли cookie та інші дані сайту . Після ввімкнення цієї функції щоразу, коли ви закриваєте браузер Edge, усі файли cookie та інші дані сайту видаляються. Це підписує вас із більшості сайтів. Використання файлів cookie для попереднього завантаження сторінки для швидшого перегляду Відкрийте браузер Edge, виберіть настройки та інше у верхньому правому куті вікна браузера. Виберіть Настройки > конфіденційність, пошук і служби . Виберіть Файли cookie та ввімкніть перемикач Попереднє завантаження сторінок для швидшого перегляду та пошуку. ПІДПИСАТИСЯ НА RSS-КАНАЛИ Потрібна додаткова довідка? Потрібні додаткові параметри? Виявити Спільнота Зверніться до нас Ознайомтеся з перевагами передплати, перегляньте навчальні курси, дізнайтесь, як захистити свій пристрій тощо. Переваги передплати на Microsoft 365 Навчальні матеріали з Microsoft 365 Захисний комплекс Microsoft Центр спеціальних можливостей Спільноти допомагають ставити запитання й відповідати на них, надавати відгуки та дізнаватися думки висококваліфікованих експертів. 
Спитати спільноту Microsoft Спільнота Microsoft Tech Оцінювачі Windows Оцінювачі Microsoft 365 Знайдіть вирішення поширених проблем або отримайте довідку від агента підтримки. онлайн-підтримка Чи ця інформація була корисною? Так Ні Дякуємо! Надіслати інші відгуки для корпорації Майкрософт? Допоможіть нам удосконалитися (Надішліть відгук до корпорації Майкрософт, щоб ми могли допомогти.) Наскільки ви задоволені якістю мови? Що вплинуло на ваші враження? Мою проблему вирішено Очистити інструкції Легко скористатися Немає жаргонних слів Корисні зображення Якість перекладу Не підходить для мого екрана Неправильні інструкції Занадто технічний текст Недостатньо інформації Недостатньо зображень Якість перекладу Маєте ще один відгук? (необов’язково) Надіслати відгук Натиснувши кнопку "Надіслати", ви надасте свій відгук для покращення продуктів і служб Microsoft. Ваш ІТ-адміністратор зможе збирати ці дані. Декларація про конфіденційність. Дякуємо за відгук! × Що нового? Copilot для організацій Copilot для особистого використання Microsoft 365 Ознайомтеся з продуктами Microsoft Програми для Windows 11 Microsoft Store Профіль облікового запису Центр завантажень Повернення Відстеження замовлення Освіта Microsoft для освіти Пристрої для освіти Microsoft Teams для освіти Microsoft 365 Education Office Education Підвищення кваліфікації викладачів Спеціальні пропозиції для студентів і батьків Azure для студентів Бізнес Захисний комплекс Microsoft Azure Dynamics 365 Microsoft 365 Microsoft Advertising Microsoft 365 Copilot Microsoft Teams Розробка й ІТ Розробник Microsoft Microsoft Learn Підтримка ШІ-програм на Marketplace Технічна спільнота Microsoft Microsoft Marketplace Microsoft Power Platform Marketplace Rewards Visual Studio Компанія Вакансії Конфіденційність у Microsoft Інвестори Сталий розвиток Українська (Україна) Піктограма відмови в параметрах конфіденційності Вибрані параметри конфіденційності Піктограма відмови в параметрах конфіденційності Вибрані параметри 
конфіденційності Конфіденційність інформації про здоров’я споживачів Звернутися до корпорації Microsoft Конфіденційність Керування файлами cookie Умови використання Товарні знаки Відомості про рекламу © Microsoft 2026 | 2026-01-13T09:30:34 |
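The cookie data these Edge settings manage follows the standard Set-Cookie format. As a small illustration (plain Python standard library, not an Edge API; the header values are made up), this sketch parses a Set-Cookie header the way a browser does before storing it:

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie header as a browser would before storing it.
# All values here are hypothetical, for illustration only.
jar = SimpleCookie()
jar.load("session_id=abc123; Domain=example.com; Path=/; Max-Age=3600; Secure; HttpOnly")

morsel = jar["session_id"]
print(morsel.value)       # the stored data: abc123
print(morsel["domain"])   # which site may read it back: example.com
print(morsel["max-age"])  # lifetime in seconds: 3600

# A "third-party" cookie is simply one whose Domain differs from the
# page you are visiting; the "Block third-party cookies" toggle
# filters on exactly this attribute.
```

Deleting cookies removes stored entries like `session_id` above, which is why clearing them signs you out of most sites.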
https://support.microsoft.com/et-ee/windows/kaitse-s%C3%A4ilitamine-windowsi-turve-rakenduse-abil-2ae0363d-0ada-c064-8b56-6a39afb6a963 | Staying protected with the Windows Security app - Microsoft Support. Applies to: Windows 11, Windows 10.

The Windows Security app is a comprehensive security solution built into Windows, designed to protect your device and data from a variety of threats. It includes features such as Microsoft Defender Antivirus, Windows Firewall, and Smart App Control, which work together to provide real-time protection against viruses, malware, and other security threats. Because the app is built into Windows, your device is protected from the moment it starts.

Tip: If you are a Microsoft 365 Family or Personal subscriber, your subscription includes Microsoft Defender, our advanced security software for Windows, Mac, iOS, and Android. For more information, see Microsoft Defender.

One of the key benefits of the Windows Security app is its real-time protection. Microsoft Defender Antivirus continuously scans your device for potential threats and takes immediate action to neutralize them. This proactive approach helps prevent malware infections and keeps your device running smoothly. In addition, the firewall feature monitors incoming and outgoing network traffic, blocking suspicious activity to protect your data and privacy.

To open the Windows Security app, search for it in the Start menu or use the Windows Security shortcut link.

Common tasks. Here is a short list of common tasks you can perform through the Windows Security app. Expand each section for more information.

Manually run a malware scan: If you are concerned about a specific file or folder on your local device, right-click the file or folder in File Explorer and select Scan with Microsoft Defender. Tip: On Windows 11, you may need to select Show more options after right-clicking to see the scan option.

Run a quick scan: If you suspect your device may have a virus or malware, run a quick scan right away. In the Windows Security app on your PC, select Virus & threat protection > Quick scan, or use the Quick scan shortcut link. If the scan finds no problems but you are still concerned, you may want to scan your device more thoroughly.

Turn Defender Antivirus real-time protection on or off: Sometimes you may need to briefly stop real-time protection. While it is off, files you open or download are not scanned for threats. However, real-time protection turns itself back on automatically after a short time to keep protecting your device. In the Windows Security app, select Virus & threat protection > Manage settings, or use the Virus & threat protection settings shortcut link, then use the toggle to turn Real-time protection on or off.

For more information about the Windows Security app and all its features, see the Windows Security app overview.

© Microsoft 2026 | 2026-01-13T09:30:34
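Real-time protection can be exercised without real malware: the EICAR test file is an industry-standard, harmless 68-character string that antivirus engines, including Microsoft Defender Antivirus, deliberately detect as if it were malicious. A hedged sketch (the file name is only an example; actually writing the file should trigger a detection when real-time protection is on):

```python
# The standard EICAR anti-malware test string: a harmless file that
# AV engines detect on purpose, so you can verify real-time
# protection is working without using real malware.
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

# The core test string is exactly 68 characters long.
assert len(EICAR) == 68

# Saving it to disk should be blocked or quarantined almost
# immediately when real-time protection is on (path is an example):
# with open("eicar_test.txt", "w") as f:
#     f.write(EICAR)
```

If Defender does not react to the written file, check that the Real-time protection toggle described above is on.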
https://github.com/llvm/llvm-project/releases/tag/llvmorg-18.1.7 | Release LLVM 18.1.7 · llvm/llvm-project · GitHub

llvm/llvm-project, Releases, tag llvmorg-18.1.7 (LLVM 18.1.7). github-actions released this 06 Jun, 77741 commits to main since this release. This tag was signed with the committer's verified signature; the key has expired. tstellar (Tom Stellard), GPG key ID: A2C794A986419D8A (Expired, Verified). Commit 768118d, LLVM 18.1.7 Release.

Release Notes: Fix a regression from the 18.1.6 release that could cause compiler crashes in the PPCMergeStringPool pass when compiling for PowerPC targets. Fix a clang-format regression (since 18.1.1) on breaking before a stream insertion operator (<<) when both operands are string literals. Fix a clang-format regression (since 17.0.6) on formatting goto labels in macro definitions.

A note on binaries: Volunteers build the binaries for the LLVM project and upload them once they have had time to build and test them. Binaries may therefore be delayed, or not available at all, for a given release. We suggest you use the binaries from your distribution, or build your own, if you rely on a specific platform or configuration.

© 2026 GitHub, Inc. | 2026-01-13T09:30:34
https://linkedin.com/company/phparch | PHP Architect | LinkedIn Skip to main content LinkedIn Top Content People Learning Jobs Games Sign in Create an account PHP Architect E-Learning Providers San Diego, California 7,682 followers Professional development for web developers: books, magazines, podcasts, events, and more. Follow View all 20 employees Report this company About us Published since 2002, PHP Architect is a magazine dedicated exclusively to the world of PHP, the programming language that powers the majority of the web. Beyond the monthly magazine, it also publishes numerous books, hosts online webinars, and offers live, online, instructor-led training courses for PHP, Mysql, and Javascript. php[architect] also organizes an annual conference, php[tek] in Chicago each May. Website http://www.phparch.com/ External link for PHP Architect Industry E-Learning Providers Company size 2-10 employees Headquarters San Diego, California Type Privately Held Founded 2002 Specialties PHP, Training, Magazine, Books, Conferences, and Swag Locations Primary 9245 Twin Trails Dr #720503 San Diego, California 92129, US Get directions Employees at PHP Architect Eric Van Johnson John Congdon The Blind Blogger Maxwell Ivey Mike Page See all employees Updates PHP Architect 7,682 followers 3h Report this post Got an idea worth sharing with the PHP community? Get published, get paid, and help developers worldwide grow. 👉 https://lnkd.in/gkUK6gMA #phparchitect #phpdeveloper #techwriting #writers #phpcommunity #devlife Like Comment Share PHP Architect 7,682 followers 3d Report this post This week on the PHP Podcast, Eric and John talk about Welcome to 2026, Denmark stops postal services, New Laravel employee, PHPTek Early Bird ending soon, the pains of making a living off open source, PHP is Back according to Nuno, and more… Links... 
The PHP Podcast 2026.01.08 | PHP Architect https://www.phparch.com Like Comment Share PHP Architect 7,682 followers 5d Report this post In this episode, Scott talks with Gunnard Engebreth about the OWASP Top 10 and his talks at PHPtek 2026. Links: OWASP Top 10 – https://lnkd.in/eKGyQJY Our Discord – https://lnkd.in/eW6ebMZR Buy our shirts –... Community Corner: OWASP Top 10 With Gunnard Engebreth | PHP Architect https://www.phparch.com Like Comment Share PHP Architect 7,682 followers 6d Report this post In this episode, Scott talks with Larry Garfield about the PHP Framework Interop Group, what needs it fills in the community, and how it’s impacting us, every day developers. Links: Our Discord – https://lnkd.in/eW6ebMZR Buy our shirts –... Community Corner: PHP Framework Interop Group with Larry Garfield | PHP Architect https://www.phparch.com 1 Like Comment Share PHP Architect 7,682 followers 1w Report this post 🎉 Start off the New Year right! Get published, get paid, and help developers worldwide grow. https://lnkd.in/gkUK6gMA #phparchitect #phpdeveloper #techwriting #writers #phpcommunity #devlife 8 Like Comment Share PHP Architect 7,682 followers 2w Report this post December is often a time of celebrations around the world. Various holidays are recognized depending on your region, culture, or religion. One thing that remains consistent during this time is our dedication to presenting you with high-quality PHP... PHP Racing | PHP Architect https://www.phparch.com 7 Like Comment Share PHP Architect 7,682 followers 3w Report this post Is 'irregardless' a real word? Debunking the myth and when to use 'regardless' vs 'irregardless' …more 2 Like Comment Share PHP Architect 7,682 followers 3w Report this post The Editor Wars: VS Code vs Vim vs PHPStorm — are AI-powered VS Code clones actually useful? 
…more 5 Like Comment Share PHP Architect 7,682 followers 3w Report this post PHP Alive And Kicking: Episode 21 Christmas Special PHP Architect Social Media: X: https://x.com/phparch Mastodon: https://lnkd.in/g9SwJjfu Bluesky: https://lnkd.in/gBVuU3br Discord: https://lnkd.in/er8AcN8d Subscribe to our magazine: https://lnkd.in/eppNWSMg Partner This podcast is made a little better thanks to our partners. Displace Infrastructure Management, Simplified Automate Kubernetes deployments across any cloud provider or bare metal with a single command. Deploy, manage, and scale your infrastructure with ease. https://displace.tech/ PHPScore Put Your Technical Debt on Autopay with PHPScore https://phpscore.com/ Honeybadger.io Honeybadger helps you deploy with confidence and be your team’s DevOps hero by combining error, uptime, and performance monitoring in one simple platform. Check it out at honeybadger.io Music Provided by Epidemic Sound https://lnkd.in/dgdaW-i PHP Alive And Kicking: Episode 21 Christmas Special www.linkedin.com 5 Like Comment Share PHP Architect 7,682 followers 4w Report this post We want to here from you! Get published, get paid, and help developers worldwide grow. 
👉 https://lnkd.in/gkUK6gMA #phparchitect #phpdeveloper #techwriting #writers #phpcommunity #devlife 6 Like Comment Share Join now to see what you are missing Find people you know at PHP Architect Browse recommended jobs for you View all updates, news, and articles Join now Similar pages PHP Software Development PHP Developer Software Development The PHP Foundation Software Development Lila Fuches Limited Information Technology & Services Cardiff, South Glamorgan PHP FIG (Framework Interoperability Group) Software Development Hire a Php Developer Software Development Newark, California Laravel News Technology, Information and Media Gastonia, NC Internship California Travel Arrangements San Diego, California PHPCon Poland Events Services Zawiercie, Śląskie Architect With Us Information Technology & Services Richards Bay, KwaZulu Natal Show more similar pages Show fewer similar pages Browse jobs Software Engineer jobs 300,699 open jobs PHP Developer jobs 14,437 open jobs Drupal Developer jobs 4,480 open jobs Engineer jobs 555,845 open jobs Developer jobs 258,935 open jobs Frontend Developer jobs 17,238 open jobs Manager jobs 1,880,925 open jobs More searches More searches PHP Developer jobs Developer jobs Engineer jobs Communications Specialist jobs Human Resources Assistant jobs Coordinator jobs Specialist jobs LinkedIn © 2026 About Accessibility User Agreement Privacy Policy Cookie Policy Copyright Policy Brand Policy Guest Controls Community Guidelines العربية (Arabic) বাংলা (Bangla) Čeština (Czech) Dansk (Danish) Deutsch (German) Ελληνικά (Greek) English (English) Español (Spanish) فارسی (Persian) Suomi (Finnish) Français (French) हिंदी (Hindi) Magyar (Hungarian) Bahasa Indonesia (Indonesian) Italiano (Italian) עברית (Hebrew) 日本語 (Japanese) 한국어 (Korean) मराठी (Marathi) Bahasa Malaysia (Malay) Nederlands (Dutch) Norsk (Norwegian) ਪੰਜਾਬੀ (Punjabi) Polski (Polish) Português (Portuguese) Română (Romanian) Русский (Russian) Svenska (Swedish) తెలుగు (Telugu) ภาษาไทย 
| 2026-01-13T09:30:34
https://penneo.com/da/trust-center/ | Trust Center - Penneo

Penneo Trust Center. At Penneo, we prioritize the security and privacy of our customers. EU-approved Qualified Trust Service Provider (QTSP). Certified under ISO/IEC 27001:2022 and ISO/IEC 27701:2019. EU-based data hosting & GDPR compliance.

Security: We operate a certified Information Security Management System (ISMS) and Privacy Information Management System (PIMS) compliant with ISO/IEC 27001:2022 (Information Security) and ISO/IEC 27701:2019 (Privacy Management), respectively. You can find our certificates here: https://penneo.com/iso-certificates/. This ensures we have best-practice security measures in place at both the technical and organisational level. For an overview of our security measures, read our Data Processing Addendum at https://penneo.com/terms/.
Data privacy All customer data, including documents, signatures, and personal information, is stored and processed exclusively within secure AWS data centers in the European Union (Frankfurt and Dublin). Read our Privacy Policy , Data Processing Addendum , or contact our DPO at compliance@penneo.com for more information. EU Qualified Trust Service Provider (eIDAS) Penneo is recognized on the European Union Trust List (EUTL) as a Qualified Trust Service Provider (QTSP), authorizing Penneo to provide legally binding trust services across the EU. View Penneo’s QTSP documentation and certificates at eutl.penneo.com . Platform availability Penneo is committed to a highly available and reliable platform. You can view our real-time and historical system status at any time. Check Live System Status . Additional regulatory compliance We continuously monitor the evolving regulatory landscape to ensure our platform meets the needs of customers in regulated industries. Governance & Sustainability: Penneo is committed to operating as a responsible business by minimizing our environmental impact and upholding strong social and governance principles. Read more about it here. DORA (Digital Operational Resilience Act): Penneo supports financial entities’ ICT risk management and reporting obligations under DORA. Please contact compliance@penneo.com for further information. EU Data Act: Our commitment to data portability and interoperability aligns with the principles of the EU Data Act. Read our Data Act Addendum for more information. Accessibility: We are committed to ensuring our platform is accessible to all end-users. Read our Accessibility Statement for more information. Talk to our experts Book a quick demo and we’ll walk you through the key features and answer your questions – no pressure, just clarity. BOOK A DEMO Get a free trial today Sign your first documents with Penneo Sign and see how easy digital compliance can be. No credit card needed. 
GET A FREE TRIAL. PENNEO A/S, Gærtorvet 1-5, DK-1799 København V, CVR: 35633766 | 2026-01-13T09:30:34
https://llvmweekly.org/issue/537 | LLVM Weekly - #537, April 15th 2024

Welcome to the five hundred and thirty-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org.

News and articles from around the web and events

No articles to highlight this week - tips always welcome. According to the LLVM calendar, in the coming week there will be the following:
- Office hours with the following hosts: Phoebe Wang, Johannes Doerfert, Aaron Ballman.
- Online sync-ups on the following topics: Flang, pointer authentication, SYCL, libc++, LLVM security group, new contributors, LLVM/offload, classic flang, Clang C/C++ language working group, loop optimisations, floating point, OpenMP in Flang, MLIR open meeting.
For more details see the LLVM calendar and the getting involved documentation on online sync-ups and office hours.

On the forums

- Renato Golin is curious about interest in organising LLVM conferences in other regions, e.g. Asia, South America, Africa.
- Vy Nguyen shared an updated design proposal for LLDB telemetry/metrics.
- Tom Stellard shared a PSA on a reported suspicious login for llvmbot and the actions taken.
- Nick Desaulniers shared that he's drafted a policy for hand-written assembly in libc.
- Matthias Gehre posted an RFC on adding arithmetic, logical and comparison expressions in MLIR's PDLL.
- Vitaly Buka is seeking to introduce a new Clang builtin, __builtin_allow_runtime_check, which is intended to be used to guard expensive runtime checks.
- Kristof Beyls shared the slides of his keynote on a BOLT-based binary analysis tool.
- The 64th edition of MLIR News is now available.
- Guillaume Chatelet posted notes from the LLVM libc round table at EuroLLVM.
- Justin Bogner kicked off an RFC thread on adding a number of additional LLVM intrinsics for math operations present in C++ or HLSL.
- Andy Kaylor proposed modifications to the LLVM language reference to describe the allowed impact of transformations on fast-math flags.

LLVM commits

- Initial support was added to the superword-level parallelism vectorizer for vectorization of non-power-of-2 ops. 6d66db3.
- uitofp now supports the nneg flag. 9170e38.
- An ExecutionSession state verifier was added to ORC, enabled under EXPENSIVE_CHECKS builds. 649523f.
- ExpandLargeFpConvert gained support for bfloat types. 110c22f.
- Statepoint and patchpoint support was added for RISC-V. 53003e3.

Clang commits

- The __cpp_concepts macro is now set to 202002L, which enables <expected> from libstdc++ to work correctly with Clang. 2875e24.
- Work started on implementing vector types in the Clang interpreter. b7a93bc.
- The syntax difference between HIP and CUDA is now documented. 2bf4889.
- The CLANG_ENABLE_CIR CMake variable was added in preparation for allowing Clang to be built with ClangIR. 44de2bb.
- The 'if' clause was implemented for OpenACC compute constructs. daa8836.

Other project commits

- A design document for debug info generation in Flang was committed. 357f6c7.
- The scudo secure memory allocator now has an EnableContiguousRegions mode. bab0507.
- The libclc build system was refactored to allow in-tree builds. 72f9881.
- libc++'s format library was updated to Unicode 15.1. 59e66c5.
- The AMDGPU shared memory optimisation pass was removed from MLIR due to a range of issues. 4471831.

Subscribe at LLVMWeekly.org. | 2026-01-13T09:30:34
http://docs.buildbot.net/current/manual/installation/misc.html#logfiles | 2.2.6. Next Steps — Buildbot 4.3.0 documentation

2.2.6.1. Launching the daemons

Both the buildmaster and the worker run as daemon programs. To launch them, pass the working directory to the buildbot and buildbot-worker commands, as appropriate:

# start a master
buildbot start [ BASEDIR ]
# start a worker
buildbot-worker start [ WORKER_BASEDIR ]

The BASEDIR is optional and can be omitted if the current directory contains the buildbot configuration (the buildbot.tac file). buildbot start: This command will start the daemon and then return, so normally it will not produce any output. To verify that the programs are indeed running, look for a pair of files named twistd.log and twistd.pid that should be created in the working directory. twistd.pid contains the process ID of the newly-spawned daemon. When the worker connects to the buildmaster, new directories will start appearing in its base directory. The buildmaster tells the worker to create a directory for each Builder which will be using that worker. All build operations are performed within these directories: CVS checkouts, compiles, and tests.
Once you get everything running, you will want to arrange for the buildbot daemons to be started at boot time. One way is to use cron , by putting them in a @reboot crontab entry [ 1 ] @reboot buildbot start [ BASEDIR ] When you run crontab to set this up, remember to do it as the buildmaster or worker account! If you add this to your crontab when running as your regular account (or worse yet, root), then the daemon will run as the wrong user, quite possibly as one with more authority than you intended to provide. It is important to remember that the environment provided to cron jobs and init scripts can be quite different than your normal runtime. There may be fewer environment variables specified, and the PATH may be shorter than usual. It is a good idea to test out this method of launching the worker by using a cron job with a time in the near future, with the same command, and then check twistd.log to make sure the worker actually started correctly. Common problems here are for /usr/local or ~/bin to not be on your PATH , or for PYTHONPATH to not be set correctly. Sometimes HOME is messed up too. If using systemd to launch buildbot-worker , it may be a good idea to specify a fixed PATH using the Environment directive (see systemd unit file example ). Some distributions may include conveniences to make starting buildbot at boot time easy. For instance, with the default buildbot package in Debian-based distributions, you may only need to modify /etc/default/buildbot (see also /etc/init.d/buildbot , which reads the configuration in /etc/default/buildbot ). Buildbot also comes with its own init scripts that provide support for controlling multi-worker and multi-master setups (mostly because they are based on the init script from the Debian package). With a little modification, these scripts can be used on both Debian and RHEL-based distributions. 
Thus, they may prove helpful to package maintainers who are working on buildbot (or to those who haven't yet split buildbot into master and worker packages).

# install as /etc/default/buildbot-worker
# or /etc/sysconfig/buildbot-worker
worker/contrib/init-scripts/buildbot-worker.default
# install as /etc/default/buildmaster
# or /etc/sysconfig/buildmaster
master/contrib/init-scripts/buildmaster.default
# install as /etc/init.d/buildbot-worker
worker/contrib/init-scripts/buildbot-worker.init.sh
# install as /etc/init.d/buildmaster
master/contrib/init-scripts/buildmaster.init.sh
# ... and tell sysvinit about them
chkconfig buildmaster reset
# ... or
update-rc.d buildmaster defaults

2.2.6.2. Launching worker as Windows service

Security consideration: Setting up the buildbot worker as a Windows service requires Windows administrator rights. It is important to distinguish the installation stage from service execution. It is strongly recommended to run the Buildbot worker with the lowest required access rights; ideally, run the service under a local, non-privileged machine account. If you decide to run the Buildbot worker under a domain account, it is recommended to create a dedicated, strongly limited user account to run the Buildbot worker service.

Windows service setup: In this description, we assume that the buildbot worker account is the local domain account worker. If the worker should run under a domain user account, replace .\worker with <domain>\worker. Replace <worker.passwd> with the user's password. Replace <worker.basedir> with the full/absolute directory specification of the created worker (what is called BASEDIR in Creating a worker).
buildbot_worker_windows_service --user .\worker --password <worker.passwd> --startup auto install
powershell -command "& {&'New-Item' -path Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BuildBot\Parameters}"
powershell -command "& {&'set-ItemProperty' -path Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BuildBot\Parameters -Name directories -Value '<worker.basedir>'}"

The first command automatically adds the user rights needed to run Buildbot as a service.

Modify environment variables: This step is optional and may depend on your needs. At the least, we have found it useful to give worker steps a dedicated temp folder, since that makes it much easier to discover which temporary files your builds leak or mishandle. As Administrator, run regedit. Open the key Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Buildbot. Create a new value of type REG_MULTI_SZ called Environment. Add entries like:

TMP=c:\bbw\tmp
TEMP=c:\bbw\tmp

Check whether Buildbot starts correctly when configured as a Windows service: As an admin user, run the command net start buildbot. If everything goes well, you should see the following output:

The BuildBot service is starting.
The BuildBot service was started successfully.

Troubleshooting: If anything goes wrong, check the Twisted log at C:\bbw\worker\twistd.log and the Windows system event log (eventvwr.msc on the command line, Show-EventLog in PowerShell).

2.2.6.3. Logfiles

While a buildbot daemon runs, it emits text to a logfile, named twistd.log. A command like tail -f twistd.log is useful to watch the command output as it runs. The buildmaster will announce any errors with its configuration file in the logfile, so it is a good idea to look at the log at startup time to check for any problems. Most buildmaster activities will cause lines to be added to the log.

2.2.6.4.
Shutdown To stop a buildmaster or worker manually, use: buildbot stop [ BASEDIR ] # or buildbot-worker stop [ WORKER_BASEDIR ] This simply looks for the twistd.pid file and kills whatever process is identified within. At system shutdown, all processes are sent a SIGKILL . The buildmaster and worker will respond to this by shutting down normally. The buildmaster will respond to a SIGHUP by re-reading its config file. Of course, this only works on Unix-like systems with signal support and not on Windows. The following shortcut is available: buildbot reconfig [ BASEDIR ] When you update the Buildbot code to a new release, you will need to restart the buildmaster and/or worker before they can take advantage of the new code. You can do a buildbot stop BASEDIR and buildbot start BASEDIR in succession, or you can use the restart shortcut, which does both steps for you: buildbot restart [ BASEDIR ] Workers can similarly be restarted with: buildbot-worker restart [ BASEDIR ] There are certain configuration changes that are not handled cleanly by buildbot reconfig . If this occurs, buildbot restart is a more robust way to fully switch over to the new configuration. buildbot restart may also be used to start a stopped Buildbot instance. This behavior is useful when writing scripts that stop, start, and restart Buildbot. A worker may also be gracefully shutdown from the web UI. This is useful to shutdown a worker without interrupting any current builds. The buildmaster will wait until the worker has finished all its current builds, and will then tell the worker to shutdown. [ 1 ] This @reboot syntax is understood by Vixie cron, which is the flavor usually provided with Linux systems. Other unices may have a cron that doesn’t understand @reboot Previous Next © Copyright Buildbot Team Members. Built with Sphinx using a theme provided by Read the Docs . | 2026-01-13T09:30:34 |
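The stop behaviour described above ("looks for the twistd.pid file and kills whatever process is identified within") can be sketched in a few lines of Python. This is an illustration of the mechanism, not Buildbot's actual implementation; the function name `stop_daemon` is invented for this sketch.

```python
import os
import signal

def stop_daemon(basedir, sig=signal.SIGTERM):
    """Sketch of what `buildbot stop` does under the hood: read the
    process ID from twistd.pid in the base directory and signal it.
    Illustrative only; the real logic lives in Buildbot's own scripts."""
    pid_path = os.path.join(basedir, "twistd.pid")
    with open(pid_path) as f:
        pid = int(f.read().strip())
    os.kill(pid, sig)  # SIGTERM asks the daemon to shut down cleanly
    return pid
```

This also shows why a stale twistd.pid can confuse stop: the command signals whatever PID the file names, whether or not that process is still the daemon.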
http://bugs.php.net/search-howto.php | PHP :: How to search the bug database. php.net | support | documentation | report a bug | advanced search | search howto | statistics | random bug | login. go to bug id or search bugs for

How to Search

This HOWTO will help you search the bug database effectively. Note that much of the information is entered by the general public and therefore cannot be fully trusted. Also, the information contained within a bug report describes the setup on which the bug was found; other setups may or may not be affected.

Basic Search: Within every bugs.php.net page header is a search box; this is the basic search option. You may enter a numeric bug ID to be redirected to that bug's page, or enter a search term to perform a default bug search. Load the advanced search to view the default values.

Advanced Search: Some explanations for most of the PHP bugs advanced search options (Feature / Explanation / Possible reasons for use):

Find bugs: The main search text box for your search terms, with each term separated by a space. The searched database fields are: author email, subject, and description. Minimum term length is three characters. There are three types of searches: all (all search terms are required), any (the default; one or more of the search terms may be present), and raw (allows full use of MySQL's FULLTEXT boolean search operators). For any, you might search for a function and its alias while not caring which shows up, or a name that changed between PHP 4 and PHP 5. Use of all makes sense if you require every term in your results, as this provides precise searching. The raw option is for custom searches; for example, you might require one term but also want to disallow another from the result. Also, adding optional terms always affects relevancy/order.

Status: Each bug has a status; this allows searching for a specific (or all) status type. Here are a few explanations: Open: This also includes assigned, analyzed, critical, and verified bugs.
(default) Feedback: Bugs requesting feedback. Typically, a bug that requests feedback will be marked as No Feedback if no feedback transpires within 15 days. Old feedback: Bugs that have been requesting feedback for over 60 days. Fresh: Bugs commented on in the last 30 days that are not closed, duplicates, or not-a-bug. Only developers and the original author can affect this date, as public comments do not count. Stale: Bugs last commented on at least 30 days ago that are not closed, duplicates, or not-a-bug. Only developers and the original author can affect this date, as public comments do not count. Suspended: Tickets which are waiting on some action that is outside the scope of the PHP developers. Wont fix: Tickets where PHP developers won't fix an issue (even though it is acknowledged as such), for reasons to be stated in the closing comment. All: All types, even not-a-bug. Useful if you're only interested in critical bugs, want to see which have been verified, or perhaps just those seeking feedback.

Category: Bugs are categorized, although sometimes it might seem like a bug could be in multiple categories. You may choose a specific category or allow any, and also disallow certain categories. If you're unable to locate a bug, consider trying a feature request or any status.

OS: Bugs that may be specific to an operating system. This value is entered by the reporter as the OS they used when finding the bug, so it may or may not be meaningful. Also, the value isn't regulated, so for example Windows may be written as Win32, Win, Windows, Win98, NT, etc., or perhaps a distribution name rather than simply Linux. The query uses a SQL LIKE statement of the form '%$os%'. Although not an accurate field, it may be of some use.

Version: Limit bugs to a specific version of PHP. A one-character integer of 3, 4, or 5 is standard. Entering a value longer than one character will perform a SQL LIKE statement of the form '$version%'. Defaults to both 4 and 5.
Limit returned bugs to a specific version of PHP. This is fairly reliable, as initial version entries are standardized, but on occasion people are known to enter bogus version information. Assigned: Some bugs get assigned to PHP developers, in which case you may search by entering the PHP username of said developer. An example use is listing the bugs assigned to yourself. Author email: Takes the email address of the original author of a bug. Useful for finding all bugs that a particular person initiated. Date: Limit bugs to those reported within a specific time period. This is not the time since a comment or developer remark was last made, but the time when the bug was originally reported. Useful for finding recently reported bugs; for example, choosing 30 days ago will limit the search to all bugs reported in the last 30 days. Bug System Statistics: You can view a variety of statistics about the bugs that have been reported on our bug statistics page. Copyright © 2001-2026 The PHP Group. All rights reserved. Last updated: Tue Jan 13 09:00:01 2026 UTC | 2026-01-13T09:30:34
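The all/any term-combination semantics described in the "Find bugs" entry above can be sketched as follows. This is an illustrative Python stand-in, not the actual bugs.php.net code, which runs the search as a MySQL FULLTEXT query over author email, subject, and description.

```python
def matches(text, terms, mode="any"):
    """Sketch of the 'all' and 'any' search modes: 'all' requires every
    term to appear, 'any' (the default) requires at least one.
    Illustrative only; the real search is a MySQL FULLTEXT query."""
    haystack = text.lower()
    hits = [term.lower() in haystack for term in terms]
    return all(hits) if mode == "all" else any(hits)
```

The raw mode has no simple equivalent here, since it passes MySQL's boolean FULLTEXT operators (+, -, etc.) straight through to the database.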
https://young-programmers.blogspot.com/2015/02/ | Young Programmers Podcast: February 2015. A video podcast for computer programmers in grades 3 and up. We learn about Scratch, Tynker, Alice, Python, Pygame, and Scala, and interview interesting programmers. From professional software developer and teacher Dave Briccetti, and many special guests. Viewing the Videos or Subscribing to the Podcast: some of the entries have a picture, which you can click to access the video. Otherwise, to see the videos, subscribe to or view the feed, or subscribe in iTunes. Thursday, February 12, 2015: This Podcast Moves to YouTube. Hi all. I have moved to YouTube: https://www.youtube.com/user/dcbriccetti (to this playlist, specifically: https://www.youtube.com/playlist?list=PLA87D270FAD3A8C73). See you there. at 7:12 PM
| 2026-01-13T09:30:34
https://docs.aws.amazon.com/zh_tw/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-testing-debugging.html | Testing and debugging Lambda@Edge functions - Amazon CloudFront (Amazon CloudFront Developer Guide)

Topics: Testing your Lambda@Edge functions; Identifying Lambda@Edge function errors in CloudFront; Troubleshooting invalid Lambda@Edge function responses (validation errors); Troubleshooting Lambda@Edge function execution errors; Determining the Lambda@Edge Region; Determining whether your account pushes logs to CloudWatch.

It is important to test your Lambda@Edge function's code in isolation, to make sure it completes the intended task, and to perform integration testing, to make sure the function works correctly with CloudFront. During integration testing, or after the function has been deployed, you might need to debug CloudFront errors, such as HTTP 5xx errors. Errors can be an invalid response returned from the Lambda function, execution errors when the function is triggered, or errors caused by throttling in the Lambda service. The sections in this topic describe strategies for determining which type of failure is causing the problem, and steps you can take to fix it.

Note: When you review CloudWatch log files or metrics while troubleshooting errors, be aware that they are displayed in the AWS Region closest to the location where the function executed. So, for example, if you have a website or web application with users in the United Kingdom, and you have a Lambda function associated with your distribution, you must change the Region to view the CloudWatch metrics or log files for the London AWS Region. For more information, see "Determining the Lambda@Edge Region" later in this topic.

Testing your Lambda@Edge functions

Testing your Lambda function has two steps: testing in isolation and integration testing.

Testing in isolation: Before you add your Lambda function to CloudFront, make sure to test it first by using the testing functionality in the Lambda console or other methods. For details about testing in the Lambda console, see "Invoke a Lambda function using the console" in the AWS Lambda Developer Guide.

Testing how the function works in CloudFront: It's important to complete integration testing, where the function is associated with a distribution and runs based on CloudFront events. Make sure that the function is triggered for the right events, and returns a response that is valid and correct for CloudFront. For example, make sure that the event structure is correct, that only valid headers are included, and so on. As you iterate on integration tests with your function in the Lambda console, refer to the steps in the Lambda@Edge tutorial as you modify your code or change the CloudFront trigger that invokes your function. For example, make sure that you're working with a numbered version of the function, as described in this step of the tutorial: "Step 4: Add a CloudFront trigger to run the function."

As you make changes and deploy a function, be aware that it takes several minutes for the updated function and CloudFront triggers to replicate across all Regions. This typically takes a few minutes, but can take up to 15 minutes. To check whether replication is finished, go to the CloudFront console and view your distribution: open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home, choose the distribution's name, and check that the distribution status has changed from In Progress back to Deployed, which means that your function has been replicated. Then follow the steps in the next section to verify that the function works. Be aware that testing in the console only validates your function's logic; it does not apply Lambda@Edge-specific service quotas (formerly known as limits).

Identifying Lambda@Edge function errors in CloudFront

After you've verified that your function logic works correctly, you might still see HTTP 5xx errors when the function runs in CloudFront. HTTP 5xx errors can be returned for a variety of reasons, including Lambda function errors or other issues in CloudFront. If you use Lambda@Edge functions, you can use graphs in the CloudFront console to help track down what's causing the error, and then work to fix it. For example, you can view whether HTTP 5xx errors are caused by CloudFront or by Lambda functions, and then, for specific functions, view related log files to investigate the issue. To troubleshoot general HTTP errors in CloudFront, see the troubleshooting steps in the topic "Troubleshooting error response status codes in CloudFront."

What causes Lambda@Edge function errors in CloudFront: There are several reasons why a Lambda function might cause an HTTP 5xx error, and the troubleshooting steps you should take depend on the type of error. Errors can be categorized as follows:

A Lambda function execution error. An execution error results when CloudFront doesn't get a response from Lambda because there are unhandled exceptions in the function or there's an error in the code, for example if the code includes callback(error).

An invalid Lambda function response is returned to CloudFront. After the function runs, CloudFront receives a response from Lambda. An error is returned if the object structure of the response doesn't conform to the Lambda@Edge event structure, or if the response contains invalid headers or other invalid fields.

The execution in CloudFront is throttled due to Lambda service quotas (formerly known as limits). The Lambda service throttles executions in each Region and returns an error if you exceed the quota. For more information, see "Quotas on Lambda@Edge."

How to determine the type of failure: As you debug and resolve errors returned by CloudFront, it's helpful to identify why CloudFront is returning an HTTP error, so you can decide where to focus. To get started, you can use the graphs provided in the Monitoring section of the CloudFront console in the AWS Management Console. For more information about viewing graphs in the Monitoring section of the CloudFront console, see "Monitoring CloudFront metrics with Amazon CloudWatch." The following graphs are especially useful when you want to track whether errors are returned by the origin server or by a Lambda function, and to narrow down the type of issue when the error was caused by a Lambda function.

Error rates graph: On the Overview tab for each distribution, one of the graphs you can view is the Error rates graph. This graph shows error rates as a percentage of the total number of requests coming to your distribution: the total error rate, total 4xx errors, total 5xx errors, and total 5xx errors from Lambda functions. Based on the error type and volume, you can take steps to investigate and troubleshoot the cause. If you see Lambda errors, you can investigate further by looking at the specific types of errors that the function returns. The Lambda@Edge errors tab includes graphs that categorize function errors by type, helping you pinpoint issues for a specific function. If you see CloudFront errors, you can troubleshoot and work to fix origin server errors or change your CloudFront configuration. For more information, see "Troubleshooting error response status codes in CloudFront."

Execution errors and invalid function responses graphs: The Lambda@Edge errors tab includes graphs that categorize, by type, the Lambda@Edge errors for a specific distribution. For example, one graph shows all execution errors by AWS Region. To make it easier to troubleshoot issues, you can open and examine log files for specific functions, by Region, to look for specific issues. To view log files for a specific function by Region: on the Lambda@Edge errors tab, under Associated Lambda@Edge functions, choose the function name, and then choose View metrics. Then, on the page that appears with your function's name, in the top-right corner choose View function logs, and then choose a Region. For example, if you see a problem in the Errors graph for the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console. In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events for your function. In addition, read the following sections of this chapter for more recommendations about troubleshooting and fixing errors.

Throttles graph: The Lambda@Edge errors tab also includes a Throttles graph. Sometimes the Lambda service throttles your function invocations on a per-Region basis, if you reach the Regional concurrency quota (formerly known as a limit). A "limit exceeded" error indicates that your function has reached a quota that the Lambda service imposes on executions in a Region. For more information, including how to request a quota increase, see "Quotas on Lambda@Edge."

For more information about how to use this information to troubleshoot HTTP errors, see "Four steps for debugging your content delivery on AWS."

Troubleshooting invalid Lambda@Edge function responses (validation errors)

If you've identified that the problem is a Lambda validation error, it means that your Lambda function is returning an invalid response to CloudFront. Follow the guidance in this section to review your function and make sure that the response conforms to CloudFront's requirements. CloudFront validates the response from a Lambda function in two ways:

The Lambda response must conform to the required object structure. Examples of bad object structures include unparsable JSON, missing required fields, and invalid objects in the response. For more information, see the Lambda@Edge event structure topic.

The response must include only valid object values. An error occurs if the response includes a valid object but has unsupported values. Examples include adding or updating disallowed or read-only headers (see "Restrictions on edge functions"), exceeding the maximum body size (see the size limits on generated responses in the Lambda@Edge errors topic), and invalid characters or values (see the Lambda@Edge event structure topic).

When Lambda returns an invalid response to CloudFront, error messages are written to log files that CloudFront pushes to CloudWatch in the Region where the Lambda function executed. Sending the log files to CloudWatch when there's an invalid response is the default behavior. However, if you associated a Lambda function with CloudFront before this functionality was released, it might not be enabled for your function. For more information, see "Determining whether your account pushes logs to CloudWatch" later in this topic. CloudFront pushes the log files to the Region corresponding to where your function executed, in log groups associated with your distribution. Log groups have the following format: /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where you can find the CloudWatch log files, see "Determining the Lambda@Edge Region" later in this topic.

If the error is reproducible, you can create a new request that results in the error, find the request ID in the failed CloudFront response (the X-Amz-Cf-Id header), and then locate that single error in the log files. The log file entry includes information that can help you identify why the error is being returned, and also lists the corresponding Lambda request ID so that you can analyze the root cause in the context of a single request. If an error is intermittent, you can use CloudFront access logs to find the request ID for a request that failed, and then search CloudWatch Logs for the corresponding error messages. For more information, see the previous section, "How to determine the type of failure."

Troubleshooting Lambda@Edge function execution errors

If the problem is a Lambda execution error, it can help to add logging statements to your Lambda function that write messages to the CloudWatch log files monitoring your function's execution in CloudFront, so you can determine whether the function is working as expected. You can then search for those statements in the CloudWatch log files to verify that your function works. Note: Even if you haven't changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could return an execution error. For information about testing and migrating to a newer version, see the announcement of recent updates to the AWS Lambda and AWS Lambda@Edge execution environments.

Determining the Lambda@Edge Region

To see the Regions where your Lambda@Edge function is receiving traffic, view graphs of metrics for the function in the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region. On the same page, you can choose a Region and view the log files for that Region so that you can investigate issues. You must review the CloudWatch log files in the correct AWS Region to see the log files created when CloudFront executed your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see "Monitoring CloudFront metrics with Amazon CloudWatch."

Determining whether your account pushes logs to CloudWatch

By default, CloudFront enables logging of invalid Lambda function responses and pushes the log files to CloudWatch by using one of the service-linked roles for Lambda@Edge. If you have a Lambda@Edge function that you added to CloudFront before the invalid Lambda function response logging feature was released, logging is enabled the next time you update your Lambda@Edge configuration, for example by adding a CloudFront trigger. You can verify that pushing log files to CloudWatch is enabled for your account by doing the following: Check whether the logs appear in CloudWatch (make sure that you look in the Region where the Lambda@Edge function executed; for more information, see "Determining the Lambda@Edge Region"). Determine whether the associated service-linked role exists in your account in IAM (your account must have the IAM role AWSServiceRoleForCloudFrontLogger; for more information about this role, see "Service-linked roles for Lambda@Edge"). | 2026-01-13T09:30:34
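To make the validation rules above concrete, here is a minimal viewer-request handler sketch for the Lambda@Edge Python runtime. The event shape (Records[0].cf.request) follows the documented Lambda@Edge event structure; returning the request object unchanged tells CloudFront to continue processing, while a malformed return value (wrong structure, read-only headers, oversized generated body) is what triggers the validation errors described in that page.

```python
def handler(event, context):
    """Minimal Lambda@Edge viewer-request handler sketch.
    The event structure (Records[0].cf.request) is taken from the
    documented Lambda@Edge event structure; this is an illustration,
    not a complete production function."""
    request = event["Records"][0]["cf"]["request"]
    # Returning the request object unchanged lets CloudFront proceed;
    # returning an object that does not match the documented structure
    # produces an invalid-response (validation) error instead.
    return request
```

A handler that instead generates a response must return a response-shaped object (status, headers, and so on) that also conforms to the documented structure.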
https://logging.apache.org/security.html | Security :: Apache Logging Services Logging Services a project of Apache Software Foundation Home About Guidelines Charter Team Processes Wiki What is logging? Download Support Security XML Schema Blog The ASF License Donate Thanks Home Security Edit this Page Security The Logging Services Security Team takes security seriously. This allows our users to place their trust in Log4j for protecting their mission-critical data. On this page, we will help you find guidance on security-related issues and access to known vulnerabilities. Log4j 1 has reached End of Life in 2015, and is no longer supported. Vulnerabilities reported after August 2015 against Log4j 1 are not checked and will not be fixed. Users should upgrade to Log4j 2 to obtain security fixes. Getting support If you need help on building or configuring Logging Services projects or other help on following the instructions to mitigate the known vulnerabilities listed here, please use our user support channels . If you need to apply a source code patch, use the building instructions for the project version that you are using. These instructions can be found in BUILDING.adoc , BUILDING.md , etc. files distributed with the sources. Reporting vulnerabilities If you have encountered an unlisted security vulnerability or other unexpected behaviour that has a security impact, or if the descriptions here are incomplete, please report them privately to the Logging Services Security Team . We urge you to carefully read the threat model detailed in following sections before submitting a report. It guides users on certain safety instructions while using Logging Services software and elaborates on what counts as an unexpected behaviour that has a security impact. Common threat model All the logging frameworks maintained by Apache Logging Services ( Log4cxx , Log4j and Log4net ) face similar challenges from malicious actors. 
The following sections outline the most common threats to logging frameworks and clarify the assumptions regarding the origin and trustworthiness of various data sources. Vulnerability reports that do not adhere to these assumptions will not be accepted and are not eligible for the YesWeHack Bug Bounty Program . User types Apache Logging Services distinguishes two kinds of users: Trusted Users Application developers and administrators are considered trusted users. They have unrestricted access to all the features of the logging framework and the environment it is deployed to. Untrusted Users All the other users are considered untrusted. Data sources Logging systems read data from multiple sources that are controlled by both trusted and untrusted users: Trusted Sources Log4cxx, Log4j, and Log4net trust environment variables, configuration properties, and configuration files. To maintain security, the following responsibilities fall on the deployer: Ensure that untrusted parties do not have write access to these resources. Ensure these resources are transmitted only over confidential channels (e.g., HTTPS, secure file systems). Be aware that non-confidential channels such as HTTP or JMX are disabled by default to prevent accidental exposure. If configuration files use interpolation features (e.g., Log4j Lookups ), ensure that only trusted data sources are used. Pay special attention to values stored in the context map (see Thread Context in Log4j ). Although the context map is only accessible by developers, it has been known to include user-provided data, such as HTTP headers, which can introduce risks. The logging frameworks trust that the objects passed to the log statements can be safely converted to strings: These frameworks should not be used to log deserialized data from untrusted sources. See the related OWASP guide for details. 
If parameterized logging is used, the format string is trusted : Programmers should use compile-time constants as format strings to prevent attackers from tampering with messages. See Don’t use string concatenation for an example. Untrusted Sources Log4cxx, Log4j and Log4net do not trust log messages. No particular input validation for log messages is necessary. They do not trust the string representation of log parameters. The logging frameworks trust neither the keys nor the values in the thread context. Threats These are the most commonly encountered threats for users of Log4cxx, Log4j and Log4net: Log Injection ( CWE-117 ) Log injection is a common attack vector to hide malicious activity in an application. Regarding this threat: Unstructured layouts such as Pattern Layout in Log4j do not protect users from log injection. These layouts are meant for human and not computer consumption. Log4cxx, Log4j and Log4net must prevent log injection in structured layouts, such as XML, JSON and RFC 5424. Supply chain attacks ( CWE-1357 ) Apache Logging Services projects do check the quality of our dependencies. Deprecated components such as the Cassandra , Kafka and CouchDB appenders are provided for backward compatibility purposes only. While we actively check for vulnerabilities in those components, they are de facto unmaintained, and we discourage their usage in production. All Apache Logging Services releases are signed with one of the keys in the Logging Services PMC KEYS file . We do not support artifacts that do not have a valid signature, and we encourage users to always check the integrity of the downloaded components. Additional information on how to verify release signatures is available on the Download page. Information disclosure ( CWE-200 ) Since logging frameworks implement information disclosure by design: It is up to the deployer to prevent unauthorized access to log files and to ensure that the appropriate log levels are configured. 
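These trust assumptions can be illustrated with a short sketch (Python is used here for brevity; the same ideas apply to the Log4cxx/Log4j/Log4net layouts described above). The attacker-controlled value is hypothetical; it shows how a newline survives an unstructured layout verbatim but is escaped by a structured one, while the format string stays a compile-time constant:

```python
import json
import logging

# Hypothetical attacker-controlled value containing a newline (log injection).
user = "alice\n2026-01-13 INFO admin logged in"

# Unstructured (pattern-style) output: the newline forges a second log line.
plain = "%s failed to log in" % user
assert "\n" in plain  # the forged entry survives verbatim

# Structured (JSON) output: the newline is escaped, so one record stays one record.
structured = json.dumps({"message": "%s failed to log in" % user})
assert "\\n" in structured and "\n" not in structured

# Parameterized logging keeps the format string a constant, rather than
# concatenating untrusted input into the format string itself.
logging.getLogger(__name__).info("user %s failed to log in", user)
```

This is why the threat model treats structured layouts, not pattern layouts, as the defense against CWE-117.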
It is up to the programmer to document which log levels and markers might contain sensitive data. Attention should be brought to the fact that libraries on which an application depends might have a different log level and marker convention. Log masking techniques are out-of-scope for Log4cxx, Log4j, and Log4net. It is up to the developer to ensure that sensitive data is properly masked before it is passed to the logging implementation. For this purpose, third-party frameworks like Safe-Logging should be used. Log reliability (e.g. CWE-778 ) Log4j is designed with reliability in mind: By default , Log4j should deliver log events to the appropriate resource even during a reconfiguration event, or it will log an error. While log events will be delivered to a resource, not all resources provide a confirmation mechanism. To ensure reliability along the entire logging pipeline, it is up to the deployer to use reliable transmission components: files, loopback network sockets or message-queue-based systems, for example. Log4j provides configuration options that discard log events if the load on the application is high. Using these options invalidates the reliability guarantees. Denial of service ( CWE-779 ) Since our logging frameworks are designed with performance in mind: Our frameworks go to great lengths to minimize performance overhead and latency and to maximize throughput. Since a universal solution does not exist, many configuration options exist to adapt the performance characteristics to a specific application. See Performance for more information. It is up to the deployer to ensure that the appenders can keep up with the volume of logs written, by using the appropriate appenders and configuring the appropriate log levels. It is up to the developer to ensure that log statements, which are not enabled, generate minimal overhead. See the Log4j API Best Practices , for example. 
Improper neutralization of Special Elements ( CWE-138 ) Log4cxx, Log4j, and Log4net do allow users to pass untrusted strings to log statements and thread context, except in the format string of parameterized logging, as mentioned above. Vulnerability handling policy The Logging Services Security Team follows the ASF Project Security guide for handling security vulnerabilities. Reported security vulnerabilities are subject to voting (by means of lazy approval , preferably) in the private security mailing list before creating a CVE and populating its associated content. This procedure involves only the creation of CVEs and blocks neither vulnerability fixes nor releases. Vulnerability Disclosure Report (VDR) Many Logging Services projects distribute a CycloneDX Software Bill of Materials (SBOM) along with each deployed artifact. This is streamlined by Logging Parent for Maven-based projects. Produced SBOMs contain BOM-links referring to a CycloneDX Vulnerability Disclosure Report (VDR) that Apache Logging Services uses for all projects it maintains. This VDR is accessible through the following URL: https://logging.apache.org/cyclonedx/vdr.xml Known vulnerabilities The Logging Services Security Team believes that accuracy, completeness and availability of security information are essential for our users. We choose to pool all information on this one page, allowing easy searching for security vulnerabilities over a range of criteria. Version ranges follow the VERS specification : Log4cxx: semver scheme Log4j: maven scheme Log4net: nuget scheme For brevity, mathematical interval notation is used, with the union operator ( ∪ ) to represent multiple ranges. 
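The half-open ranges used in the entries below can be checked mechanically. A minimal sketch in Python, assuming simplified dotted-integer versions (real maven/semver qualifiers such as 2.0-beta9 need a scheme-aware comparator, as the VERS specification describes):

```python
# Check a version against a union of half-open ranges such as
# [2.4, 2.12.3) ∪ [2.13.0, 2.17.0). Versions are simplified to dotted
# integers; this is NOT a full maven/semver comparator.
def parse(v):
    return tuple(int(p) for p in v.split("."))

def affected(version, ranges):
    v = parse(version)
    return any(parse(lo) <= v < parse(hi) for lo, hi in ranges)

# Ranges for CVE-2021-45105 (Java 6 range omitted for brevity).
cve_2021_45105 = [("2.4", "2.12.3"), ("2.13.0", "2.17.0")]
assert affected("2.14.1", cve_2021_45105)       # in the affected union
assert not affected("2.17.0", cve_2021_45105)   # the fixed version is excluded
```

Tuple comparison gives the usual lexicographic version order, which is why the half-open interval test is a single chained comparison.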
CVE-2025-68161 Summary Missing TLS hostname verification in Socket appender CVSS 4.x Score & Vector 6.3 MEDIUM (CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:N/VC:L/VI:N/VA:N/SC:N/SI:L/SA:N) Components affected Log4j Core Versions affected [2.0-beta9, 2.25.3) Versions fixed 2.25.3 Description The Socket Appender in Log4j Core versions 2.0-beta9 through 2.25.2 does not perform TLS hostname verification of the peer certificate, even when the verifyHostName configuration attribute or the log4j2.sslVerifyHostName system property is set to true . This issue may allow a man-in-the-middle attacker to intercept or redirect log traffic under the following conditions: The attacker is able to intercept or redirect network traffic between the client and the log receiver. The attacker can present a server certificate issued by a certification authority trusted by the Socket Appender’s configured trust store (or by the default Java trust store if no custom trust store is configured). Remediation Users are advised to upgrade to Log4j Core version 2.25.3 , which fully addresses this issue. For earlier versions, the risk can be reduced by carefully restricting the trust store used by the Socket Appender. When configuring a trust store for Log4j Core, we recommend following established best practices. For example, NIST SP 800-52 Rev. 2 (§4.5.2) recommends using a trust store that contains only the CA certificates required for the intended communication scope, such as a private or enterprise CA. Credits This issue was discovered by Samuli Leinonen. It was reported through the Log4j Bug Bounty Program on YesWeHack funded by the Sovereign Tech Agency. 
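As background to the hostname-verification issue above: TLS hostname verification ties the peer certificate to the host you intended to reach, which is what blocks the man-in-the-middle scenario described. A short illustration with Python's ssl module (not Log4j code) shows the setting the fix enforces:

```python
import ssl

# A default context verifies both the certificate chain and the hostname.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True            # hostname verification on
assert ctx.verify_mode == ssl.CERT_REQUIRED  # chain verification on

# Disabling hostname verification (what the affected versions effectively
# did, despite verifyHostName=true) accepts ANY certificate the trust
# store vouches for, regardless of which host presented it.
ctx.check_hostname = False
```

This also motivates the interim mitigation of narrowing the trust store: with hostname checks absent, the trust store is the only remaining constraint on the peer.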
References CVE-2025-68161 Pull request that fixes the issue CVE-2025-54813 Summary Improper escaping with JSONLayout CVSS 4.x Score & Vector 6.3 MEDIUM (CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:N/VI:L/VA:N/SC:N/SI:L/SA:N) Components affected Log4cxx Versions affected [0.11.0, 1.5.0) Versions fixed 1.5.0 Description When using JSONLayout , not all payload bytes are properly escaped. If an attacker-supplied message contains certain non-printable characters, these will be passed along in the message and written out as part of the JSON message. This may prevent applications that consume these logs from correctly interpreting the information within them. Remediation Users are recommended to upgrade to version 1.5.0 , which fixes the issue. Credits This issue was discovered and remediated with support from the Sovereign Tech Agency, through the Log4j Bug Bounty Program on YesWeHack . References CVE-2025-54813 Pull request that fixes the issue CVE-2025-54812 Summary Improper HTML escaping in HTMLLayout CVSS 4.x Score & Vector 2.1 LOW (CVSS:4.0/AV:N/AC:H/AT:N/PR:N/UI:A/VC:L/VI:L/VA:N/SC:L/SI:L/SA:N) Components affected Log4cxx Versions affected [0, 1.5.0) Versions fixed 1.5.0 Description When using HTMLLayout , logger names are not properly escaped when writing out to the HTML file. If untrusted data is used to retrieve the name of a logger, an attacker could theoretically inject HTML or Javascript in order to hide information from logs or steal data from the user. In order to activate this, the following sequence must occur: Log4cxx is configured to use HTMLLayout . Logger name comes from an untrusted string. Logger with compromised name logs a message. User opens the generated HTML log file in their browser, leading to potential XSS. Because logger names are generally constant strings, we assess the impact to users as LOW. Remediation Users are recommended to upgrade to version 1.5.0 , which fixes the issue. 
Credits This issue was discovered and remediated with support from the Sovereign Tech Agency, through the Log4j Bug Bounty Program on YesWeHack . References CVE-2025-54812 Pull request #509 Pull request #514 CVE-2021-44832 Summary JDBC appender is vulnerable to remote code execution in certain configurations CVSS 3.x Score & Vector 6.6 MEDIUM (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H) Components affected log4j-core Versions affected [2.0-beta7, 2.3.1) ∪ [2.4, 2.12.3) ∪ [2.13.0, 2.17.0) Versions fixed 2.3.1 (for Java 6), 2.12.3 (for Java 7), or 2.17.0 (for Java 8 and later) Description An attacker with write access to the logging configuration can construct a malicious configuration using a JDBC Appender with a data source referencing a JNDI URI which can execute remote code. This issue is fixed by limiting JNDI data source names to the java protocol. Mitigation Upgrade to 2.3.1 (for Java 6), 2.12.3 (for Java 7), or 2.17.0 (for Java 8 and later). In prior releases confirm that if the JDBC Appender is being used it is not configured to use any protocol other than java . References CVE-2021-44832 LOG4J2-3242 CVE-2021-45105 Summary Infinite recursion in lookup evaluation CVSS 3.x Score & Vector 5.9 MEDIUM (CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H) Components affected log4j-core Versions affected [2.0-alpha1, 2.3.1) ∪ [2.4, 2.12.3) ∪ [2.13.0, 2.17.0) Versions fixed 2.3.1 (for Java 6), 2.12.3 (for Java 7), and 2.17.0 (for Java 8 and later) Description Log4j versions 2.0-alpha1 through 2.16.0 (excluding 2.3.1 and 2.12.3 ), did not protect from uncontrolled recursion that can be implemented using self-referential lookups. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId} ), attackers with control over Thread Context Map (MDC) input data can craft malicious input data that contains a recursive lookup, resulting in a StackOverflowError that will terminate the process. 
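The self-referential Context Lookup recursion described in CVE-2021-45105 can be sketched with a toy expander (illustrative only, not Log4j code; the ctx key name is hypothetical). Without a depth cap the expansion never terminates; with one, the crash becomes a clean error:

```python
import re

# Attacker-controlled, self-referential context value.
CTX = {"loginId": "${ctx:loginId}"}

def expand(s, depth=0, max_depth=10):
    # A naive ${ctx:...} expander with a recursion cap, turning unbounded
    # self-reference into an error instead of exhausting the stack.
    if depth > max_depth:
        raise RecursionError("lookup recursion limit exceeded")
    return re.sub(
        r"\$\{ctx:([^}]+)\}",
        lambda m: expand(CTX.get(m.group(1), ""), depth + 1, max_depth),
        s,
    )

try:
    expand("${ctx:loginId}")
    raise AssertionError("expected the recursion cap to trigger")
except RecursionError:
    pass  # the cap fired, as intended
```

Strings without lookups pass through unchanged, which is why only configurations that feed attacker-controlled data into Context Lookups were exposed.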
This is also known as a DoS (Denial-of-Service) attack. Mitigation Upgrade to 2.3.1 (for Java 6), 2.12.3 (for Java 7), or 2.17.0 (for Java 8 and later). Alternatively, this infinite recursion issue can be mitigated in configuration: In PatternLayout in the logging configuration, replace Context Lookups like ${ctx:loginId} or $${ctx:loginId} with Thread Context Map patterns ( %X , %mdc , or %MDC ). Otherwise, in the configuration, remove references to Context Lookups like ${ctx:loginId} or $${ctx:loginId} where they originate from sources external to the application such as HTTP headers or user input. Note that this mitigation is insufficient in releases older than 2.12.2 (for Java 7), and 2.16.0 (for Java 8 and later) as the issues fixed in those releases will still be present. Note that only the log4j-core JAR file is impacted by this vulnerability. Applications using only the log4j-api JAR file without the log4j-core JAR file are not impacted by this vulnerability. Credits Independently discovered by Hideki Okamoto of Akamai Technologies, Guy Lederfein of Trend Micro Research working with Trend Micro’s Zero Day Initiative, and another anonymous vulnerability researcher. References CVE-2021-45105 LOG4J2-3230 CVE-2021-45046 Summary Thread Context Lookup is vulnerable to remote code execution in certain configurations CVSS 3.x Score & Vector 9.0 CRITICAL (CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:H/I:H/A:H) Components affected log4j-core Versions affected [2.0-beta9, 2.3.1) ∪ [2.4, 2.12.3) ∪ [2.13.0, 2.16.0) Versions fixed 2.3.1 (for Java 6), 2.12.3 (for Java 7), and 2.16.0 (for Java 8 and later) Description It was found that the fix to address CVE-2021-44228 in Log4j 2.15.0 was incomplete in certain non-default configurations. 
When the logging configuration uses a non-default Pattern Layout with a Thread Context Lookup (for example, $${ctx:loginId} ), attackers with control over Thread Context Map (MDC) can craft malicious input data using a JNDI Lookup pattern, resulting in an information leak and remote code execution in some environments and local code execution in all environments. Remote code execution has been demonstrated on macOS, Fedora, Arch Linux, and Alpine Linux. Note that this vulnerability is not limited to just the JNDI lookup. Any other Lookup could also be included in a Thread Context Map variable and possibly have private details exposed to anyone with access to the logs. Note that only the log4j-core JAR file is impacted by this vulnerability. Applications using only the log4j-api JAR file without the log4j-core JAR file are not impacted by this vulnerability. Mitigation Upgrade to Log4j 2.3.1 (for Java 6), 2.12.3 (for Java 7), or 2.16.0 (for Java 8 and later). Credits This issue was discovered by Kai Mindermann of iC Consult and separately by 4ra1n. Additional vulnerability details discovered independently by Ash Fox of Google, Alvaro Muñoz and Tony Torralba from GitHub, Anthony Weems of Praetorian, and RyotaK (@ryotkak). References CVE-2021-45046 LOG4J2-3221 CVE-2021-44228 Summary JNDI lookup can be exploited to execute arbitrary code loaded from an LDAP server CVSS 3.x Score & Vector 10.0 CRITICAL (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H) Components affected log4j-core Versions affected [2.0-beta9, 2.3.1) ∪ [2.4, 2.12.2) ∪ [2.13.0, 2.15.0) Versions fixed 2.3.1 (for Java 6), 2.12.2 (for Java 7), and 2.15.0 (for Java 8 and later) Description In Log4j, the JNDI features used in configurations, log messages, and parameters do not protect against attacker-controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers. 
Note that only the log4j-core JAR file is impacted by this vulnerability. Applications using only the log4j-api JAR file without the log4j-core JAR file are not impacted by this vulnerability. Mitigation Log4j 1 mitigation Log4j 1 has reached End of Life in 2015, and is no longer supported. Vulnerabilities reported after August 2015 against Log4j 1 are not checked and will not be fixed. Users should upgrade to Log4j 2 to obtain security fixes. Log4j 1 does not have Lookups, so the risk is lower. Applications using Log4j 1 are only vulnerable to this attack when they use JNDI in their configuration. A separate CVE ( CVE-2021-4104 ) has been filed for this vulnerability. To mitigate, audit your logging configuration to ensure it has no JMSAppender configured. Log4j 1 configurations without JMSAppender are not impacted by this vulnerability. Log4j 2 mitigation Upgrade to Log4j 2.3.1 (for Java 6), 2.12.2 (for Java 7), or 2.15.0 (for Java 8 and later). Credits This issue was discovered by Chen Zhaojun of Alibaba Cloud Security Team. References CVE-2021-44228 LOG4J2-3198 LOG4J2-3201 LOG4J2-3242 CVE-2020-9488 Summary Improper validation of certificate with host mismatch in SMTP appender CVSS 3.x Score & Vector 3.7 LOW (CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:N/A:N) Components affected log4j-core Versions affected [2.0-beta1, 2.3.2) ∪ [2.4, 2.12.3) ∪ [2.13.0, 2.13.2) Versions fixed 2.3.2 (for Java 6), 2.12.3 (for Java 7) and 2.13.2 (for Java 8 and later) Description Improper validation of certificate with host mismatch in SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. The reported issue was caused by an error in SslConfiguration . Any element using SslConfiguration in the Log4j Configuration is also affected by this issue. This includes HttpAppender , SocketAppender , and SyslogAppender . 
Usages of SslConfiguration that are configured via system properties are not affected. Mitigation Upgrade to 2.3.2 (Java 6), 2.12.3 (Java 7) or 2.13.2 (Java 8 and later). Alternatively, users can set the mail.smtp.ssl.checkserveridentity system property to true to enable SMTPS hostname verification for all SMTPS mail sessions. Credits This issue was discovered by Peter Stöckli. References CVE-2020-9488 LOG4J2-2819 CVE-2017-5645 Summary TCP/UDP socket servers can be exploited to execute arbitrary code CVSS 2.0 Score & Vector 7.5 HIGH (AV:N/AC:L/Au:N/C:P/I:P/A:P) Components affected log4j-core Versions affected [2.0-alpha1, 2.8.2) Versions fixed 2.8.2 (for Java 7 and later) Description When using the TCP socket server or UDP socket server to receive serialized log events from another application, a specially crafted binary payload can be sent that, when deserialized, can execute arbitrary code. Mitigation Java 7 and above users should migrate to version 2.8.2 or avoid using the socket server classes. Java 6 users should avoid using the TCP or UDP socket server classes, or they can manually backport the security fix commit from 2.8.2 . Credits This issue was discovered by Marcio Almeida de Macedo of Red Team at Telstra. References CVE-2017-5645 LOG4J2-1863 Security fix commit Copyright © 1999-2026 The Apache Software Foundation . Licensed under the Apache Software License, Version 2.0 . Please read our privacy policy . Apache, Log4j, and the Apache feather logo are trademarks or registered trademarks of The Apache Software Foundation. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. | 2026-01-13T09:30:34 |
http://bugs.php.net/bugs-generating-backtrace.php | PHP :: Generating a gdb backtrace Generating a gdb backtrace Noticing PHP crashes There's no absolute way to know that PHP is crashing, but there may be signs. Typically, if you access a page that is always supposed to generate output (has a leading HTML block, for example), and suddenly get "Document contains no data" from your browser, it may mean that PHP crashes somewhere along the execution of the script. Another way to tell that PHP is crashing is by looking at the Apache error logs, and looking for SEGV (Apache 1.2) or Segmentation Fault (Apache 1.3). Important! To get a backtrace with correct information you must have a non-stripped PHP binary! If you don't have a core file yet: Remove any limits you may have on core dump size from your shell: tcsh: unlimit coredumpsize bash/sh: ulimit -c unlimited Ensure that the directory in which you're running PHP, or the PHP-enabled httpd, has write permissions for the user who's running PHP. Cause PHP to crash: PHP CGI: Simply run php with the script that crashes it PHP Apache Module: Run httpd -X, and access the script that crashes PHP Generic way to get a core on Linux Set up the core pattern (run this command as root ): echo "<cores dir>/core-%e.%p" > /proc/sys/kernel/core_pattern make sure the directory is writable by PHP Set the ulimit (see above how to do it). Restart/rerun PHP. After that any process crashing in your system, including PHP, will leave its core file in the directory you've specified in core_pattern . Once you have the core file: Run gdb with the path to the PHP or PHP-enabled httpd binary, and path to the core file. 
Some examples: gdb /usr/local/apache/sbin/httpd /usr/local/apache/sbin/core gdb /home/user/dev/php-snaps/sapi/cli/php /home/user/dev/testing/core At the gdb prompt, run: (gdb) bt If you can't get a core file: Run httpd -X under gdb with something like: gdb /usr/local/apache/sbin/httpd (gdb) run -X Then use your web browser and access your server to force the crash. You should see a gdb prompt appear and some message indicating that there was a crash. At this gdb prompt, type: (gdb) bt or, running from the commandline gdb /home/user/dev/php-snaps/sapi/cli/php (gdb) run /path/to/script.php (gdb) bt This should generate a backtrace, that you should submit in the bug report, along with any other details you can give us about your setup, and offending script. Locating which function call caused a segfault: You can locate the function call that caused a segfault, easily, with gdb. First, you need a core file or to generate a segfault under gdb as described above. In PHP, each function is executed by an internal function called execute() and has its own stack. Each line generated by the bt command represents a function call stack. Typically, you will see several execute() lines when you issue bt . You are interested in the last execute() stack (i.e. smallest frame number). You can move the current working stack with the up , down or frame commands. Below is an example gdb session that can be used as a guideline on how to handle your segfault. 
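The core-dump size limit removed with ulimit earlier can also be raised from inside a process. A minimal Python sketch (POSIX only; shown in Python because it is easy to run standalone, but the shell `ulimit -c unlimited` before starting php or httpd achieves the same thing):

```python
import resource

# Programmatic equivalent of "ulimit -c unlimited": raise the soft
# core-dump size limit up to whatever the hard limit allows.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

# The soft limit now matches the hard limit, so crashes in this process
# (and its children) can write cores up to that size.
assert resource.getrlimit(resource.RLIMIT_CORE)[0] == hard
```

Raising the soft limit to the hard limit never requires privileges; raising the hard limit itself does.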
Sample gdb session (gdb) bt #0 0x080ca21b in _efree (ptr=0xbfffdb9b) at zend_alloc.c:240 #1 0x080d691a in _zval_dtor (zvalue=0x8186b94) at zend_variables.c:44 #2 0x080cfab3 in _zval_ptr_dtor (zval_ptr=0xbfffdbfc) at zend_execute_API.c:274 #3 0x080f1cc4 in execute (op_array=0x816c670) at ./zend_execute.c:1605 #4 0x080f1e06 in execute (op_array=0x816c530) at ./zend_execute.c:1638 #5 0x080f1e06 in execute (op_array=0x816c278) at ./zend_execute.c:1638 #6 0x080f1e06 in execute (op_array=0x8166eec) at ./zend_execute.c:1638 #7 0x080d7b93 in zend_execute_scripts (type=8, retval=0x0, file_count=3) at zend.c:810 #8 0x0805ea75 in php_execute_script (primary_file=0xbffff650) at main.c:1310 #9 0x0805cdb3 in main (argc=2, argv=0xbffff6fc) at cgi_main.c:753 #10 0x400c91be in __libc_start_main (main=0x805c580 , argc=2, ubp_av=0xbffff6fc, init=0x805b080 , fini=0x80f67b4 , rtld_fini=0x4000ddd0 , stack_end=0xbffff6ec) at ../sysdeps/generic/libc-start.c:129 (gdb) frame 3 #3 0x080f1cc4 in execute (op_array=0x816c670) at ./zend_execute.c:1605 (gdb) print (char *)(executor_globals.function_state_ptr->function)->common.function_name $14 = 0x80fa6fa "pg_result_error" (gdb) print (char *)executor_globals.active_op_array->function_name $15 = 0x816cfc4 "result_error" (gdb) print (char *)executor_globals.active_op_array->filename $16 = 0x816afbc "/home/yohgaki/php/DEV/segfault.php" (gdb) In this session, frame 3 is the last execute() call. The frame 3 command moves the current working stack to the proper frame. print (char *)(executor_globals.function_state_ptr->function)->common.function_name prints the function name. In the sample gdb session, the pg_result_error() call is causing the segfault. You can print any internal data that you like, if you know the internal data structure. Please do not ask how to use gdb or about the internal data structure. Refer to gdb manual for gdb usage and to the PHP source for the internal data structure. 
You may not see execute if the segfault happens without calling any functions. Copyright © 2001-2026 The PHP Group All rights reserved. Last updated: Tue Jan 13 09:00:01 2026 UTC | 2026-01-13T09:30:34 |
https://aws.amazon.com/ru/waf/ | Web Application Firewall – Web API Protection – AWS WAF – AWS AWS WAF Up to 10 million standard bot-control requests per month on the AWS Free Tier → Protect web applications from common web attacks Benefits of AWS WAF Managed rules save significant time Saving time through managed rules lets you spend more of it building applications. Monitor, block, or rate-limit bots More convenient monitoring, blocking, or rate limiting of common bots. Fewer steps to configure protections Speed up comprehensive security configuration with a consolidated interface that reduces the complexity and number of steps for deploying protections by up to 80%. Centralized monitoring and response A single comprehensive interface combines core security features with specialized partner solutions for greater security visibility and control. This unified approach turns security data into actionable recommendations, removing operational complexity and accelerating response to risks. Improved security posture Preconfigured protection packs draw on AWS security expertise, providing instant protection templates for specific industries and workload types such as APIs, PHP applications, and web services. These templates are continuously optimized to keep protections current and require no deep deployment expertise. Receive security recommendations that help strengthen your overall security posture. Why AWS WAF? AWS WAF lets you create security rules that control bot traffic and block common attack patterns such as SQL injection and cross-site scripting (XSS). Use cases Web traffic filtering Create rules to filter web requests based on conditions such as IP addresses, HTTP headers and message bodies, or custom URIs. Account takeover prevention Monitor your application's login page for unauthorized access to user accounts via compromised credentials. Automatic layer 7 DDoS protection Designed to continuously monitor and automatically mitigate application-layer (layer 7) distributed denial-of-service (DDoS) events within seconds. Rapid security rollout Launch new applications with confidence using a simplified step-by-step setup wizard with a single-page interface for activating preconfigured security settings tailored to your needs. Improved security posture With expert-designed rule packs, centralized monitoring, and continuous recommendations, you get instant protection to optimize your security posture. © 2026, Amazon Web Services, Inc. or its affiliates. All rights reserved. | 2026-01-13T09:30:34 |
https://docs.aws.amazon.com/it_it/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html | Set up IAM permissions and roles for Lambda@Edge - Amazon CloudFront Documentation, Developer Guide

To configure Lambda@Edge, you must have the following IAM permissions and roles for AWS Lambda:

- IAM permissions: these permissions let you create your Lambda function and associate it with your CloudFront distribution.
- A Lambda function execution role (IAM role): the Lambda service principals assume this role to run your function.
- Service-linked roles for Lambda@Edge: service-linked roles allow specific AWS services to replicate Lambda functions to AWS Regions and enable CloudFront to use CloudWatch log files.

IAM permissions required to associate Lambda@Edge functions with CloudFront distributions

In addition to the IAM permissions that you need for Lambda, you need the following permissions to associate Lambda functions with CloudFront distributions:

- lambda:GetFunction: grants permission to get configuration information for your Lambda function and a presigned URL to download a .zip file that contains the function.
- lambda:EnableReplication*: grants permission on the resource policy so that the Lambda replication service can get the function code and configuration.
- lambda:DisableReplication*: grants permission on the resource policy so that the Lambda replication service can delete the function.

Important: you must add the asterisk (*) at the end of the lambda:EnableReplication* and lambda:DisableReplication* actions. For the resource, specify the ARN of the function version that you want to execute when a CloudFront event occurs, as in the following example: arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2

- iam:CreateServiceLinkedRole: grants permission to create the service-linked role that Lambda@Edge uses to replicate Lambda functions in CloudFront. After you configure Lambda@Edge for the first time, the service-linked role is created for you automatically; you don't need to add this permission to other distributions that use Lambda@Edge.
- cloudfront:UpdateDistribution or cloudfront:CreateDistribution: grants permission to update or create a distribution.

For more information, see the following topics: Identity and Access Management for Amazon CloudFront; Lambda resource access permissions in the AWS Lambda Developer Guide.

Function execution role for service principals

You must create an IAM role that the service principals lambda.amazonaws.com and edgelambda.amazonaws.com can assume when they run your function.

Tip: when you create your function in the Lambda console, you can choose to create a new execution role from an AWS policy template. This step automatically adds the Lambda@Edge permissions required to run your function. See Step 5 in Tutorial: Create a basic Lambda@Edge function. For more information about creating an IAM role manually, see Creating roles and attaching policies (console) in the IAM User Guide.

Example: role trust policy. You can add this role under the Trust relationships tab in the IAM console. Do not add this policy under the Permissions tab.

JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "lambda.amazonaws.com", "edgelambda.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }

For more information about the permissions that you need to grant to the execution role, see Lambda resource access permissions in the AWS Lambda Developer Guide.

Note: by default, whenever a CloudFront event triggers a Lambda function, data is written to CloudWatch Logs. If you want to use these logs, the execution role needs permission to write data to CloudWatch Logs. You can use the predefined AWSLambdaBasicExecutionRole to grant that permission to the execution role. For more information about CloudWatch Logs, see Edge function logs. If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket, the execution role needs permission to perform that action.

Service-linked roles for Lambda@Edge

Lambda@Edge uses IAM service-linked roles. A service-linked role is a unique type of IAM role that is linked directly to a service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. Lambda@Edge uses the following IAM service-linked roles:

- AWSServiceRoleForLambdaReplicator: Lambda@Edge uses this role to replicate functions to AWS Regions. When you first add a Lambda@Edge trigger in CloudFront, a role named AWSServiceRoleForLambdaReplicator is created automatically to allow Lambda@Edge to replicate functions to AWS Regions. This role is required in order to use Lambda@Edge functions. The ARN for the AWSServiceRoleForLambdaReplicator role looks like the following example: arn:aws:iam::123456789012:role/aws-service-role/replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator
- AWSServiceRoleForCloudFrontLogger: CloudFront uses this role to push log files into CloudWatch. You can use the log files to debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger role is created automatically when you add a Lambda@Edge function association, to allow CloudFront to push Lambda@Edge error log files to CloudWatch. The ARN for the AWSServiceRoleForCloudFrontLogger role looks like this: arn:aws:iam::account_number:role/aws-service-role/logger.cloudfront.amazonaws.com/AWSServiceRoleForCloudFrontLogger

A service-linked role makes setting up and using Lambda@Edge easier because you don't have to add the necessary permissions manually. Lambda@Edge defines the permissions of its service-linked roles, and only Lambda@Edge can assume them. The defined permissions include the trust policy and the permissions policy, and the permissions policy cannot be attached to any other IAM entity. You must remove any associated CloudFront or Lambda@Edge resources before you can delete a service-linked role. This protects your Lambda@Edge resources, so that you don't remove a service-linked role that is still required to access active resources. For more information about service-linked roles, see Service-linked roles for CloudFront.

Service-linked role permissions for Lambda@Edge

Lambda@Edge uses two service-linked roles, named AWSServiceRoleForLambdaReplicator and AWSServiceRoleForCloudFrontLogger. The following sections describe the permissions for each of these roles.

Contents: Service-linked role permissions for Lambda replicator; Service-linked role permissions for CloudFront logger

Service-linked role permissions for Lambda replicator

This service-linked role allows Lambda to replicate Lambda@Edge functions to AWS Regions. The AWSServiceRoleForLambdaReplicator service-linked role trusts the replicator.lambda.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the specified resources:

- lambda:CreateFunction on arn:aws:lambda:*:*:function:*
- lambda:DeleteFunction on arn:aws:lambda:*:*:function:*
- lambda:DisableReplication on arn:aws:lambda:*:*:function:*
- iam:PassRole on all AWS resources
- cloudfront:ListDistributionsByLambdaFunction on all AWS resources

Service-linked role permissions for CloudFront logger

This service-linked role allows CloudFront to push log files into CloudWatch so that you can debug Lambda@Edge validation errors. The AWSServiceRoleForCloudFrontLogger service-linked role trusts the logger.cloudfront.amazonaws.com service to assume the role. The role permissions policy allows Lambda@Edge to complete the following actions on the resource arn:aws:logs:*:*:log-group:/aws/cloudfront/*:

- logs:CreateLogGroup
- logs:CreateLogStream
- logs:PutLogEvents

You must configure permissions to allow an IAM entity (such as a user, group, or role) to delete Lambda@Edge service-linked roles. For more information, see Service-linked role permissions in the IAM User Guide.

Creating service-linked roles for Lambda@Edge

You don't usually create service-linked roles for Lambda@Edge manually. The service creates the roles for you automatically in the following scenarios:

- When you create a trigger for the first time, the service creates the AWSServiceRoleForLambdaReplicator role (if it doesn't already exist). This role allows Lambda to replicate Lambda@Edge functions to AWS Regions. If you delete it, the service-linked role is created again when you add a new trigger for Lambda@Edge in a distribution.
- When you update or create a CloudFront distribution that has a Lambda@Edge association, the service creates the AWSServiceRoleForCloudFrontLogger role (if it doesn't already exist). This role allows CloudFront to push your log files to CloudWatch. If you delete the service-linked role, it is created again when you update or create a CloudFront distribution that has a Lambda@Edge association.

To create these service-linked roles manually, you can run the following AWS Command Line Interface (AWS CLI) commands:

To create the AWSServiceRoleForLambdaReplicator role, run: aws iam create-service-linked-role --aws-service-name replicator.lambda.amazonaws.com

To create the AWSServiceRoleForCloudFrontLogger role, run: aws iam create-service-linked-role --aws-service-name logger.cloudfront.amazonaws.com

Editing Lambda@Edge service-linked roles

Lambda@Edge does not allow you to edit the AWSServiceRoleForLambdaReplicator or AWSServiceRoleForCloudFrontLogger service-linked roles. After a service-linked role is created, you cannot change its name, because various entities might reference it. You can, however, use IAM to edit the description of the role. For more information, see Editing a service-linked role in the IAM User Guide.

Supported AWS Regions for Lambda@Edge service-linked roles

CloudFront supports using service-linked roles for Lambda@Edge in the following AWS Regions:

- US East (N. Virginia) – us-east-1
- US East (Ohio) – us-east-2
- US West (N. California) – us-west-1
- US West (Oregon) – us-west-2
- Asia Pacific (Mumbai) – ap-south-1
- Asia Pacific (Seoul) – ap-northeast-2
- Asia Pacific (Singapore) – ap-southeast-1
- Asia Pacific (Sydney) – ap-southeast-2
- Asia Pacific (Tokyo) – ap-northeast-1
- Europe (Frankfurt) – eu-central-1
- Europe (Ireland) – eu-west-1
- Europe (London) – eu-west-2
- South America (São Paulo) – sa-east-1 | 2026-01-13T09:30:34 |
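The trust policy and the association permissions described on the page above can be sketched as plain JSON-serializable data. This is a minimal illustration, not an official AWS policy document: the `associate_statement` simply collects the actions listed in the docs (note the required trailing asterisks on the replication actions), and the `Resource` value is the placeholder ARN from the documentation.

```python
import json

# Execution-role trust policy from the docs: both Lambda service principals
# must be allowed to assume the role for Lambda@Edge to run the function.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": ["lambda.amazonaws.com", "edgelambda.amazonaws.com"]
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

# Illustrative statement for associating a function with a distribution.
# The trailing '*' on the replication actions is mandatory per the docs.
associate_statement = {
    "Effect": "Allow",
    "Action": [
        "lambda:GetFunction",
        "lambda:EnableReplication*",
        "lambda:DisableReplication*",
        "iam:CreateServiceLinkedRole",
        "cloudfront:UpdateDistribution",
    ],
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:TestFunction:2",
}

print(json.dumps(trust_policy, indent=2))
```

Serializing with `json.dumps` is a cheap way to confirm the policy is well-formed before pasting it into the IAM console's Trust relationships tab.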
https://docs.aws.amazon.com/id_id/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html#slr-permissions-cloudfront-logger | Set up IAM permissions and roles for Lambda@Edge - Amazon CloudFront (Indonesian-language edition of the same Developer Guide page) | 2026-01-13T09:30:34 |
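The service-linked role ARNs quoted in the Lambda@Edge documentation follow a fixed shape: `arn:aws:iam::<account>:role/aws-service-role/<trusting-service>/<role-name>`. A small sketch that pulls those parts out of the replicator ARN shown in the docs (the account ID is the documentation's placeholder, not a real account):

```python
# Service-linked role ARN quoted in the CloudFront Developer Guide.
arn = ("arn:aws:iam::123456789012:role/aws-service-role/"
       "replicator.lambda.amazonaws.com/AWSServiceRoleForLambdaReplicator")

# ARN fields are colon-separated: arn:partition:service:region:account:resource.
# For IAM the region field is empty, hence the double colon.
_prefix, _partition, _service, _region, account_id, resource = arn.split(":", 5)

# The resource path encodes the trusting service and the role name.
path_parts = resource.split("/")  # ['role', 'aws-service-role', service, name]
trusting_service = path_parts[2]
role_name = path_parts[3]

print(account_id, trusting_service, role_name)
```

The trusting service (`replicator.lambda.amazonaws.com` here, `logger.cloudfront.amazonaws.com` for the logger role) is exactly the value you would pass to `aws iam create-service-linked-role --aws-service-name` when creating the role manually.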
https://www.php.net/manual/ru/function.session-name.php | PHP: session_name - Manual update page now Downloads Documentation Get Involved Help Search docs Getting Started Introduction A simple tutorial Language Reference Basic syntax Types Variables Constants Expressions Operators Control Structures Functions Classes and Objects Namespaces Enumerations Errors Exceptions Fibers Generators Attributes References Explained Predefined Variables Predefined Exceptions Predefined Interfaces and Classes Predefined Attributes Context options and parameters Supported Protocols and Wrappers Security Introduction General considerations Installed as CGI binary Installed as an Apache module Session Security Filesystem Security Database Security Error Reporting User Submitted Data Hiding PHP Keeping Current Features HTTP authentication with PHP Cookies Sessions Handling file uploads Using remote files Connection handling Persistent Database Connections Command line usage Garbage Collection DTrace Dynamic Tracing Function Reference Affecting PHP's Behaviour Audio Formats Manipulation Authentication Services Command Line Specific Extensions Compression and Archive Extensions Cryptography Extensions Database Extensions Date and Time Related Extensions File System Related Extensions Human Language and Character Encoding Support Image Processing and Generation Mail Related Extensions Mathematical Extensions Non-Text MIME Output Process Control Extensions Other Basic Extensions Other Services Search Engine Extensions Server Specific Extensions Session Extensions Text Processing Variable and Type Related Extensions Web Services Windows Only Extensions XML Manipulation GUI Extensions Keyboard Shortcuts ? 
This help j Next menu item k Previous menu item g p Previous man page g n Next man page G Scroll to bottom g g Scroll to top g h Goto homepage g s Goto search (current page) / Focus search box session_regenerate_id » « session_module_name Руководство по PHP Справочник функций Модули для работы с сессиями Сессии Функции для работы с сессиями Язык: English German Spanish French Italian Japanese Brazilian Portuguese Russian Turkish Ukrainian Chinese (Simplified) Other session_name (PHP 4, PHP 5, PHP 7, PHP 8) session_name — Получает и (или) устанавливает название текущей сессии Описание session_name ( ? string $name = null ): string | false Функция session_name() возвращает название текущей сессии. При передаче аргумента в параметр name функция session_name() обновит название и вернёт старое название сессии. Функция session_name() изменяет название сессии в блоке данных cookie HTTP-протокола, а при включённой директиве session.use_trans_sid — в содержимом вывода, если для сессии указали новое имя name . Функция session_name() выдаёт ошибку уровня E_WARNING , если функцию вызвали после отправки cookie HTTP-протокола. Функцию session_name() вызывают до вызова функции session_start() , чтобы сессия работала правильно. При запуске запроса название сессии сбрасывается на значение по умолчанию, которое хранится в директиве session.name . Поэтому функцию session_name() требуется вызывать для каждого запроса, и до вызова функции session_start() . Список параметров name Название сессии ссылается на название сессии, которое сохраняется в блоках данных cookie и подставляется в URL-адреса содержимого страницы. Пример имени сессии: PHPSESSID . Для имён сессий разрешается указывать только буквенно-цифровые символы; лучше предпочесть короткие и понятные названия, которое, например, увидят пользователи с включённым предупреждением о блоках данных cookie. Имя текущей сессии изменится на значение аргумента, если в параметр name передали аргумент, значение которого не равно null . 
Внимание Название сессии нельзя составлять только из цифр, в имени требуется указать хотя бы одну букву. При нарушении требования PHP каждый раз будет генерировать новый идентификатор. Возвращаемые значения Функция возвращает имя текущей сессии. С параметром name функция обновляет название текущей сессии и возвращает старое название сессии или false , если возникла ошибка. Список изменений Версия Описание 8.0.0 Параметр name теперь принимает значение null . 7.2.0 Функция session_name() проверяет статус сессии, раньше функция проверяла только статус cookie. Поэтому старую версию функции session_name() разрешалось вызывать после вызова функции session_start() , что иногда приводило к сбою PHP и неправильному поведению. Примеры Пример #1 Пример использования функции session_name() <?php /* Устанавливаем для имени сессии значение "WebsiteID" */ $previous_name = session_name ( "WebsiteID" ); echo "Раньше сессия называлась ' { $previous_name } '" ; ?> Смотрите также Параметр конфигурации session.name Нашли ошибку? Инструкция • Исправление • Сообщение об ошибке + Добавить Примечания пользователей 9 notes up down 146 Hongliang Qiang ¶ 21 years ago This may sound no-brainer: the session_name() function will have no essential effect if you set session.auto_start to "true" in php.ini . And the obvious explanation is the session already started thus cannot be altered before the session_name() function--wherever it is in the script--is executed, same reason session_name needs to be called before session_start() as documented. I know it is really not a big deal. But I had a quite hard time before figuring this out, and hope it might be helpful to someone like me. up down 65 php at wiz dot cx ¶ 17 years ago if you try to name a php session "example.com" it gets converted to "example_com" and everything breaks. don't use a period in your session name. 
up down 40 relsqui at chiliahedron dot com ¶ 16 years ago Remember, kids--you MUST use session_name() first if you want to use session_set_cookie_params() to, say, change the session timeout. Otherwise it won't work, won't give any error, and nothing in the documentation (that I've seen, anyway) will explain why. Thanks to brandan of bildungsroman.com who left a note under session_set_cookie_params() explaining this or I'd probably still be throwing my hands up about it. up down 21 Joseph Dalrymple ¶ 14 years ago For those wondering, this function is expensive! On a script that was executing in a consistent 0.0025 seconds, just the use of session_name("foo") shot my execution time up to ~0.09s. By simply sacrificing session_name("foo"), I sped my script up by roughly 0.09 seconds. up down 10 Victor H ¶ 10 years ago As Joseph Dalrymple said, adding session_name do slow down a little bit the execution time. But, what i've observed is that it decreased the fluctuation between requests. Requests on my script fluctuated between 0,045 and 0,022 seconds. With session_name("myapp"), it goes to 0,050 and 0,045. Not a big deal, but that's a point to note. For those with problems setting the name, when session.auto_start is set to 1, you need to set the session.name on php.ini! up down 3 mmulej at gmail dot com ¶ 4 years ago Hope this is not out of php.net noting scope. session_name('name') must be set before session_start() because the former changes ini settings and the latter reads them. For the same reason session_set_cookie_params($options) must be set before session_start() as well. I find it best to do the following. 
function is_session_started() { if (php_sapi_name() === 'cli') return false; if (version_compare(phpversion(), '5.4.0', '>=')) return session_status() === PHP_SESSION_ACTIVE; return session_id() !== ''; } if (!is_session_started()) { session_name($session_name); session_set_cookie_params($cookie_options); session_start(); } up down 0 tony at marston-home dot demon dot co dot uk ¶ 7 years ago The description that session_name() gets and/or sets the name of the current session is technically wrong. It does nothing but deal with the value originally supplied by the session.name value within the php.ini file. Thus:- $name = session_name(); is functionally equivalent to $name = ini_get('session.name'); and session_name('newname); is functionally equivalent to ini_set('session.name','newname'); This also means that: $old_name = session_name('newname'); is functionally equivalent to $old_name = ini_set('session.name','newname'); The current value of session.name is not attached to a session until session_start() is called. Once session_start() has used session.name to lookup the session_id() in the cookie data the name becomes irrelevant as all further operations on the session data are keyed by the session_id(). Note that changing session.name while a session is currently active will not update the name in any session cookie. The new name does not take effect until the next call to session_start(), and this requires that the current session, which was created with the previous value for session.name, be closed. up down -4 tony at marston-home dot demon dot co dot uk ¶ 7 years ago The description has recently been modified to contain the statement "When new session name is supplied, session_name() modifies HTTP cookie". This is not correct as session_name() has never modified any cookie data. A change in session.name does not become effective until session_start() is called, and it is session_start() that creates the cookie if it does not already exist. 
See the following bug report for details: https://bugs.php.net/bug.php?id=76413
descartavel1+php at gmail dot com ¶ 2 years ago
Always try to prefix your session name with `__Host-` or `__Secure-` to benefit from browsers' improved cookie security. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes
Also, if you have session.auto_start enabled, you must set this name via session.name in your config (php.ini, .htaccess, etc.). | 2026-01-13T09:30:34
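The browser rules behind those two prefixes can be summarized in a small checker. This is an illustrative sketch paraphrasing the MDN page linked above, not php.net code; the function name and its keyword arguments are assumptions made for the example:

```python
def cookie_prefix_ok(name, *, secure=False, path=None, domain=None):
    """Check the browser rules for cookie name prefixes (per MDN):
    __Secure- requires the Secure attribute; __Host- additionally
    forbids a Domain attribute and requires Path=/."""
    if name.startswith("__Host-"):
        return secure and domain is None and path == "/"
    if name.startswith("__Secure-"):
        return secure
    return True  # no prefix, so no extra constraints apply

print(cookie_prefix_ok("__Host-myapp", secure=True, path="/"))         # True
print(cookie_prefix_ok("__Host-myapp", secure=True, domain="x.test"))  # False
```

A browser silently rejects a prefixed cookie that violates these rules, which is exactly why the prefix adds security: a cookie named `__Host-myapp` can only ever have been set over HTTPS by the host itself.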
https://releases.llvm.org/18.1.8/tools/polly/docs/ReleaseNotes.html | Release Notes — Polly 18.1.8 documentation. 18.1.8 Release Notes: In Polly 18 the following important changes have been incorporated. © Copyright 2010-2024, The Polly Team. | 2026-01-13T09:30:34
https://penneo.com/da/product-updates/ | Product Updates - Penneo
Products: Penneo Sign, Validator, Why Penneo, Integrations. Use cases: Digital signing, Document management, Fill and sign PDF forms, Automation of signature workflows, eIDAS compliance. Industries: Audit and accounting, Finance and banking, Legal services, Real estate, Administration and HR. Pricing. Resources: Knowledge hub, Trust Center, Product updates, SIGN Help Center, KYC Help Center, System status. LOG IN (Penneo Sign, Penneo KYC). BOOK A MEETING. FREE TRIAL.
Product updates
Product updates from Q4 2025: Faster sending, global QES & API updates (December 3, 2025). In the last quarter of 2025, we focused on two main goals: removing friction from your daily workflows and strengthening the... Read more
Penneo QES: Highest legal validity and documented security (December 3, 2025). Penneo now offers qualified electronic signatures through both ID Verifier and itsme® QES. The gold standard for digital signatures: a qualified electronic... Read more
Send documents for signature in seconds with Penneo Sign (December 3, 2025). A faster and simpler way to send documents for signature has arrived. We are introducing a new, streamlined option for... Read more
What's new in Penneo Sign?
Product updates from Q2 2025 (September 1, 2025). In the second quarter of 2025, we focused on key improvements to increase the stability of our platform, strengthen our commitment... Read more
Introducing Penneo's new analytics dashboard: Usage Overview (September 1, 2025). We are launching Usage Overview, a brand-new tool that gathers your most important data in one place. With Usage Overview you now have... Read more
Q1 2025 Penneo Sign updates (March 31, 2025). The first quarter of 2025 brought a series of improvements to Penneo Sign, with a focus on better performance, user experience and... Read more
Need to collect signatures from international stakeholders? Penneo Sign now makes it easier (March 5, 2025). Do you work with international board members, clients or employees who cannot sign with local eID methods such as MitID or MitID Erhverv?... Read more
What's new in Penneo Sign? Product updates from H2 2024 (December 18, 2024). Stay up to date with the latest improvements in Penneo Sign! In the second half of 2024 we introduced new features... Read more
What's new in Penneo Sign? Product updates from H1 2024 (June 26, 2024). Stay up to date with the latest improvements in Penneo Sign! In the first months of 2024 we introduced new... Read more
SMS verification in Penneo Sign: An extra layer of security in document signing (April 20, 2024). In our continued efforts to improve the security and efficiency of your digital signatures, we are introducing SMS verification. Read more
SEE MORE ARTICLES
Stay up to date: Already a Penneo user? Sign up for our newsletter to get product news straight to your inbox and be among the first to hear about product updates that can ease your workflows! Email * Yes please, I would like to receive product news and tips from Penneo by email, and I am aware that I can unsubscribe from this newsletter at any time.
Company: About us, Careers, Privacy policy, Terms, Use of cookies, Accessibility Statement, Whistleblower Policy, Contact us. PENNEO A/S - Gærtorvet 1-5, DK-1799 København V - CVR: 35633766 | 2026-01-13T09:30:34
http://docs.buildbot.net/current/manual/cmdline.html#cmdline-start%20(worker) | 2.7. Command-line Tool — Buildbot 4.3.0 documentation
2.7. Command-line Tool
This section describes the command-line tools available after a Buildbot installation. The two main command-line tools are buildbot and buildbot-worker. The former handles a Buildbot master and the latter handles a Buildbot worker. Every command-line tool has a list of global options and a set of commands, each with its own options. One can run these tools in the following way:
buildbot [global options] command [command options]
buildbot-worker [global options] command [command options]
The buildbot command is used on the master, while buildbot-worker is used on the worker. The global options are the same for both tools:
--help Print general help about available commands and global options and exit. All subsequent arguments are ignored.
--verbose Set verbose output.
--version Print the current buildbot version and exit. All subsequent arguments are ignored.
You can get help on any command by specifying --help as a command option:
buildbot command --help
You can also use the manual pages for buildbot and buildbot-worker for a quick reference on command-line options. The remainder of this section describes each buildbot command.
See the Command Line Index for a full list.
2.7.1. buildbot
The buildbot command-line tool can be used to start or stop a buildmaster and to interact with a running buildmaster. Some of its subcommands are intended for buildmaster admins, while some are for developers who are editing the code that the buildbot is monitoring.
2.7.1.1. Administrator Tools
The following buildbot sub-commands are intended for buildmaster administrators:
create-master
buildbot create-master -r {BASEDIR}
This creates a new directory and populates it with files that allow it to be used as a buildmaster's base directory. You will usually want to use the -r option to create a relocatable buildbot.tac. This allows you to move the master directory without editing this file.
upgrade-master
buildbot upgrade-master {BASEDIR}
This upgrades a previously created buildmaster's base directory for a new version of the buildbot master source code. This will copy the web server static files and potentially upgrade the db.
start
buildbot start [--nodaemon] {BASEDIR}
This starts a buildmaster which was already created in the given base directory. The daemon is launched in the background, with events logged to a file named twistd.log. The --nodaemon option instructs Buildbot to skip daemonizing; the process will start in the foreground and will only return to the command line when it is stopped. Additionally, the user can set the environment variable START_TIMEOUT to specify the amount of time the script waits for the master to start before it declares the operation a failure.
restart
buildbot restart [--nodaemon] {BASEDIR}
Restart the buildmaster. This is equivalent to stop followed by start. The --nodaemon option has the same meaning as for start.
stop
buildbot stop {BASEDIR}
This terminates the daemon (either buildmaster or worker) running in the given directory. The --clean option shuts down the buildmaster cleanly.
With the --no-wait option, the buildbot stop command will send the buildmaster shutdown signal and exit immediately, without waiting for complete buildmaster shutdown.
sighup
buildbot sighup {BASEDIR}
This sends a SIGHUP to the buildmaster running in the given directory, which causes it to re-read its master.cfg file.
checkconfig
buildbot checkconfig {BASEDIR|CONFIG_FILE}
This checks if the buildmaster configuration is well-formed and contains no deprecated or invalid elements. If no arguments are used, or the base directory is passed as the argument, the config file specified in buildbot.tac is checked. If the argument is the path to a config file, then it will be checked without using the buildbot.tac file.
cleanupdb
buildbot cleanupdb {BASEDIR|CONFIG_FILE} [-q]
This command is a frontend for various database maintenance jobs:
optimiselogs: This optimization groups logs into bigger chunks to apply a higher level of compression. This script runs for as long as it takes to finish the job, including the time needed to check the master.cfg file.
copy-db
buildbot copy-db {DESTINATION_URL} {BASEDIR} [-q]
This command copies all buildbot data from the source database configured in the buildbot configuration file to the destination database. The URL of the destination database is specified on the command line. The destination database may have a different type from the source database. The destination database must be empty. The script will initialize it in the same way as if a new Buildbot installation was created. The source database must already be upgraded to the current Buildbot version by the buildbot upgrade-master command.
2.7.1.2. Developer Tools
These tools are provided for use by the developers who are working on the code that the buildbot is monitoring.
try
This lets a developer ask the question: What would happen if I committed this patch right now?
It runs the unit test suite (across multiple build platforms) on the developer's current code, allowing them to make sure they will not break the tree when they finally commit their changes. The buildbot try command is meant to be run from within a developer's local tree, and starts by figuring out the base revision of that tree (what revision was current the last time the tree was updated), and a patch that can be applied to that revision of the tree to make it match the developer's copy. This (revision, patch) pair is then sent to the buildmaster, which runs a build with that SourceStamp. If you want, the tool will emit status messages as the builds run, and will not terminate until the first failure has been detected (or the last success). There is an alternate form which accepts a pre-made patch file (typically the output of a command like svn diff). This --diff form does not require a local tree to run from. See try --diff concerning the --diff command option. For this command to work, several pieces must be in place: the Try_Jobdir or Try_Userpass scheduler, as well as some client-side configuration.
Locating the master
The try command needs to be told how to connect to the try scheduler, and must know which of the authentication approaches described above is in use by the buildmaster. You specify the approach by using --connect=ssh or --connect=pb (or try_connect = 'ssh' or try_connect = 'pb' in .buildbot/options). For the PB approach, the command must be given a --master argument (in the form HOST:PORT) that points to the TCP port that you picked in the Try_Userpass scheduler. It also takes a --username and --passwd pair of arguments that match one of the entries in the buildmaster's userpass list. These arguments can also be provided as try_master, try_username, and try_password entries in the .buildbot/options file. For the SSH approach, the command must be given --host and --username to get to the buildmaster host.
It must also be given --jobdir, which points to the inlet directory configured above. The jobdir can be relative to the user's home directory, but most of the time you will use an explicit path like ~buildbot/project/trydir. These arguments can be provided in .buildbot/options as try_host, try_username, try_password, and try_jobdir. If you need to use something different from the default ssh command for connecting to the remote system, you can use the --ssh command line option or try_ssh in the configuration file. The SSH approach also provides a --buildbotbin argument to allow specification of the buildbot binary to run on the buildmaster. This is useful in the case where buildbot is installed in a virtualenv on the buildmaster host, or in other circumstances where the buildbot command is not on the path of the user given by --username. The --buildbotbin argument can be provided in .buildbot/options as try_buildbotbin.
The following command line arguments are deprecated, but retained for backward compatibility:
--tryhost is replaced by --host
--trydir is replaced by --jobdir
--master is replaced by --masterstatus
Likewise, the following .buildbot/options file entries are deprecated, but retained for backward compatibility:
try_dir is replaced by try_jobdir
masterstatus is replaced by try_masterstatus
Waiting for results
If you provide the --wait option (or try_wait = True in .buildbot/options), the buildbot try command will wait until your changes have either been proven good or bad before exiting. Unless you use the --quiet option (or try_quiet=True), it will emit a progress message every 60 seconds until the builds have completed. The SSH connection method does not support waiting for results.
Choosing the Builders
A trial build is performed on multiple Builders at the same time, and the developer gets to choose which Builders are used (limited to a set selected by the buildmaster admin with the TryScheduler's builderNames= argument). The set you choose will depend upon what your goals are: if you are concerned about cross-platform compatibility, you should use multiple Builders, one from each platform of interest. You might use just one builder if that platform has libraries or other facilities that allow better test coverage than what you can accomplish on your own machine, or faster test runs. The set of Builders to use can be specified with multiple --builder arguments on the command line. It can also be specified with a single try_builders option in .buildbot/options that uses a list of strings to specify all the Builder names:
try_builders = ["full-OSX", "full-win32", "full-linux"]
If you are using the PB approach, you can get the names of the builders that are configured for the try scheduler using the get-builder-names argument:
buildbot try --get-builder-names --connect=pb --master=... --username=... --passwd=...
Specifying the VC system
The try command also needs to know how to take the developer's current tree and extract the (revision, patch) source-stamp pair. Each VC system uses a different process, so you start by telling the try command which VC system you are using, with an argument like --vc=cvs or --vc=git. This can also be provided as try_vc in .buildbot/options. The following names are recognized: bzr cvs darcs hg git mtn p4 svn
Finding the top of the tree
Some VC systems (notably CVS and SVN) track each directory more-or-less independently, which means the try command needs to move up to the top of the project tree before it will be able to construct a proper full-tree patch. To accomplish this, the try command will crawl up through the parent directories until it finds a marker file.
The default name for this marker file is .buildbot-top, so when you are using CVS or SVN you should touch .buildbot-top from the top of your tree before running buildbot try. Alternatively, you can use a filename like ChangeLog or README, since many projects put one of these files in their top-most directory (and nowhere else). To set this filename, use --topfile=ChangeLog, or set it in the options file with try_topfile = 'ChangeLog'. You can also manually set the top of the tree with --topdir=~/trees/mytree, or try_topdir = '~/trees/mytree'. If you use try_topdir in a .buildbot/options file, you will need a separate options file for each tree you use, so it may be more convenient to use the try_topfile approach instead. Other VC systems which work on full projects instead of individual directories (Darcs, Mercurial, Git, Monotone) do not require try to know the top directory, so the --try-topfile and --try-topdir arguments will be ignored. If the try command cannot find the top directory, it will abort with an error message.
The following command line arguments are deprecated, but retained for backward compatibility:
--try-topdir is replaced by --topdir
--try-topfile is replaced by --topfile
Determining the branch name
Some VC systems record the branch information in a way that try can locate it. For the others, if you are using something other than the default branch, you will have to tell the buildbot which branch your tree is using. You can do this with either the --branch argument, or a try_branch entry in the .buildbot/options file.
Determining the revision and patch
Each VC system has a separate approach for determining the tree's base revision and computing a patch.
CVS
try pretends that the tree is up to date. It converts the current time into a -D time specification, uses it as the base revision, and computes the diff between the upstream tree as of that point in time versus the current contents.
This works, more or less, but requires that the local clock be in reasonably good sync with the repository.
SVN
try does a svn status -u to find the latest repository revision number (emitted on the last line, in the Status against revision: NN message). It then performs an svn diff -r NN to find out how your tree differs from the repository version, and sends the resulting patch to the buildmaster. If your tree is not up to date, this will result in the try tree being created with the latest revision, then backwards patches applied to bring it back to the version you actually checked out (plus your actual code changes), but this will still result in the correct tree being used for the build.
bzr
try does a bzr revision-info to find the base revision, then a bzr diff -r$base.. to obtain the patch.
Mercurial
hg parents --template '{node}\n' emits the full revision id (as opposed to the common 12-char truncated form), which is a SHA1 hash of the current revision's contents. This is used as the base revision. hg diff then provides the patch relative to that revision. For try to work, your working directory must only have patches that are available from the same remotely-available repository that the build process' source.Mercurial will use.
Perforce
try does a p4 changes -m1 ... to determine the latest changelist, and implicitly assumes that the local tree is synced to this revision. This is followed by a p4 diff -du to obtain the patch. A p4 patch differs slightly from a normal diff: it contains full depot paths and must be converted to paths relative to the branch top. To convert, the following restriction is imposed: the p4base (see P4Source) is assumed to be //depot
Darcs
try does a darcs changes --context to find the list of all patches back to and including the last tag that was made. This text file (plus the location of a repository that contains all these patches) is sufficient to re-create the tree.
Therefore the contents of this context file are the revision stamp for a Darcs-controlled source tree. It then does a darcs diff -u to compute the patch relative to that revision.
Git
git branch -v lists all the branches available in the local repository, along with the revision ID each points to and a short summary of the last commit. The line containing the currently checked out branch begins with "* " (star and space) while all the others start with "  " (two spaces). try scans for this line and extracts the branch name and revision from it. Then it generates a diff against the base revision.
Todo: I'm not sure if this actually works the way it's intended, since the extracted base revision might not actually exist in the upstream repository. Perhaps we need to add a --remote option to specify the remote tracking branch to generate a diff against.
Monotone
mtn automate get_base_revision_id emits the full revision id, which is a SHA1 hash of the current revision's contents. This is used as the base revision. mtn diff then provides the patch relative to that revision. For try to work, your working directory must only have patches that are available from the same remotely-available repository that the build process' source.Monotone will use.
patch information
You can provide --who=dev to designate who is running the try build. This will add the dev to the Reason field on the try build's status web page. You can also set try_who = dev in the .buildbot/options file. Note that --who=dev will not work on version 0.8.3 or earlier masters. Similarly, --comment=COMMENT will specify the comment for the patch, which is also displayed in the patch information. The corresponding config-file option is try_comment.
Sending properties
You can set properties to send with your change using either the --property=key=value option, which sets a single property, or the --properties=key1=value1,key2=value2... option, which sets multiple comma-separated properties. Either of these can be specified multiple times. Note that the --properties option uses commas to split on properties, so if your property value itself contains a comma, you'll need to use the --property option to set it.
try --diff
Sometimes you might have a patch from someone else that you want to submit to the buildbot. For example, a user may have created a patch to fix some specific bug and sent it to you by email. You've inspected the patch and suspect that it might do the job (and have at least confirmed that it doesn't do anything evil). Now you want to test it out. One approach would be to check out a new local tree, apply the patch, run your local tests, then use buildbot try to run the tests on other platforms. An alternate approach is to use the buildbot try --diff form to have the buildbot test the patch without using a local tree. This form takes a --diff argument which points to a file that contains the patch you want to apply. By default this patch will be applied to the TRUNK revision, but if you give the optional --baserev argument, a tree of the given revision will be used as a starting point instead of TRUNK. You can also use buildbot try --diff=- to read the patch from stdin. Each patch has a patchlevel associated with it. This indicates the number of slashes (and preceding pathnames) that should be stripped before applying the diff. This exactly corresponds to the -p or --strip argument to the patch utility. By default buildbot try --diff uses a patchlevel of 0, but you can override this with the -p argument.
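The patchlevel works like patch(1)'s -p: it strips that many leading path components from each file path in the diff before the patch is applied. A minimal sketch of that stripping rule (illustrative only; strip_patchlevel and the sample paths are not Buildbot code):

```python
def strip_patchlevel(path: str, p: int) -> str:
    """Strip the first `p` path components from a diff file path,
    mimicking the -p / --strip argument of the patch utility."""
    parts = path.split("/")
    return "/".join(parts[p:])

# With -p0 (the buildbot try --diff default) the path is used as-is;
# with -p1 the leading component (e.g. git's "a/" prefix) is dropped.
print(strip_patchlevel("a/src/main.c", 0))  # a/src/main.c
print(strip_patchlevel("a/src/main.c", 1))  # src/main.c
```

This is why git-produced diffs (whose paths start with a/ and b/) typically need a patchlevel of 1, while a plain svn diff works with the default of 0.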
When you use --diff, you do not need to use any of the other options that relate to a local tree, specifically --vc, --try-topfile, or --try-topdir. These options will be ignored. Of course you must still specify how to get to the buildmaster (with --connect, --tryhost, etc).
2.7.1.3. Other Tools
These tools are generally used by buildmaster administrators.
sendchange
This command is used to tell the buildmaster about source changes. It is intended to be used from within a commit script, installed on the VC server. It requires that you have a PBChangeSource running in the buildmaster (by being set in c['change_source']).
buildbot sendchange --master {MASTERHOST}:{PORT} --auth {USER}:{PASS} --who {USER} {FILENAMES..}
The --auth option specifies the credentials to use to connect to the master, in the form user:pass. If the password is omitted, then sendchange will prompt for it. If both are omitted, the old default (username "change" and password "changepw") will be used. Note that this password is well-known, and should not be used on an internet-accessible port. The --master and --username arguments can also be given in the options file (see .buildbot config directory). There are other (optional) arguments which can influence the Change that gets submitted:
--branch (or branch in the options file) This provides the (string) branch specifier. If omitted, it defaults to None, indicating the default branch. All files included in this Change must be on the same branch.
--category (or category in the options file) This provides the (string) category specifier. If omitted, it defaults to None, indicating no category. The category property can be used by schedulers to filter what changes they listen to.
--project (or project in the options file) This provides the (string) project to which this change applies, and defaults to ''.
The project can be used by schedulers to decide which builders should respond to a particular change.
--repository (or repository in the options file) This provides the repository from which this change came, and defaults to ''.
--revision This provides a revision specifier, appropriate to the VC system in use.
--revision_file This provides a filename which will be opened and the contents used as the revision specifier. This is specifically for Darcs, which uses the output of darcs changes --context as a revision specifier. This context file can be a couple of kilobytes long, spanning a couple of lines per patch, and would be a hassle to pass as a command-line argument.
--property This parameter is used to set a property on the Change generated by sendchange. Properties are specified as a name:value pair, separated by a colon. You may specify many properties by passing this parameter multiple times.
--comments This provides the change comments as a single argument. You may want to use --logfile instead.
--logfile This instructs the tool to read the change comments from the given file. If you use - as the filename, the tool will read the change comments from stdin.
--encoding Specifies the character encoding for all other parameters, defaulting to 'utf8'.
--vc Specifies which VC system the Change is coming from, one of: cvs, svn, darcs, hg, bzr, git, mtn, or p4. Defaults to None.
user
Note that in order to use this command, you need to configure a CommandlineUserManager instance in your master.cfg file, which is explained in Users Options. This command allows you to manage users in buildbot's database. No extra requirements are needed to use this command, aside from the Buildmaster running. For details on how Buildbot manages users, see Users.
--master The user command can be run virtually anywhere, provided a location of the running buildmaster. The --master argument is of the form MASTERHOST:PORT.
--username PB connection authentication that should match the arguments to CommandlineUserManager.
--passwd PB connection authentication that should match the arguments to CommandlineUserManager.
--op There are four supported values for the --op argument: add, update, remove, and get. Each is described in full in the following sections.
--bb_username Used with the --op=update option, this sets the user's username for web authentication in the database. It requires --bb_password to be set along with it.
--bb_password Also used with the --op=update option, this sets the password portion of a user's web authentication credentials into the database. The password is first encrypted prior to storage for security reasons.
--ids When working with users, you need to be able to refer to them by unique identifiers to find particular users in the database. The --ids option lets you specify a comma-separated list of these identifiers for use with the user command. The --ids option is used only when using --op=remove or --op=get.
--info Users are known in buildbot as a collection of attributes tied together by some unique identifier (see Users). These attributes are specified in the form {TYPE}={VALUE} when using the --info option. These {TYPE}={VALUE} pairs are specified in a comma-separated list, so for example:
--info=svn=jdoe,git='John Doe <joe@example.com>'
The --info option can be specified multiple times in the user command, as each specified option will be interpreted as a new user. Note that --info is only used with --op=add or with --op=update, and whenever you use --op=update you need to specify the identifier of the user you want to update. This is done by prepending the --info arguments with {ID}: .
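The {TYPE}={VALUE} list, with its optional {ID}: prefix for updates, can be parsed roughly like this. This is an illustrative sketch of the documented shape, not Buildbot's actual parser; quoted values that themselves contain commas or colons are deliberately not handled:

```python
def parse_info(arg: str):
    """Parse a simplified --info value: an optional 'ID:' prefix,
    followed by comma-separated TYPE=VALUE pairs. Quoting is ignored;
    this only illustrates the documented format."""
    ident = None
    first, _, rest = arg.partition(":")
    # A leading segment without '=' before the first ':' is the user ID
    if rest and "=" not in first:
        ident, arg = first, rest
    attrs = dict(pair.split("=", 1) for pair in arg.split(","))
    return ident, attrs

print(parse_info("svn=jdoe,git=jdoe@example.com"))
# (None, {'svn': 'jdoe', 'git': 'jdoe@example.com'})
print(parse_info("jdoe:git=jdoe@example.com"))
# ('jdoe', {'git': 'jdoe@example.com'})
```

The second form, with the leading identifier, corresponds to the --op=update usage described above.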
If we were to update 'jschmo' from the previous example, it would look like this:
--info=jdoe:git='Joe Doe <joe@example.com>'
Note that --master, --username, --passwd, and --op are always required to issue the user command. The --master, --username, and --passwd options can be specified in the option file with the keywords user_master, user_username, and user_passwd, respectively. If user_master is not specified, then --master from the options file will be used instead. Below are examples of how each command should look. Whenever a user command is successful, results will be shown to whoever issued the command.
For --op=add:
buildbot user --master={MASTERHOST} --op=add \
  --username={USER} --passwd={USERPW} \
  --info={TYPE}={VALUE},...
For --op=update:
buildbot user --master={MASTERHOST} --op=update \
  --username={USER} --passwd={USERPW} \
  --info={ID}:{TYPE}={VALUE},...
For --op=remove:
buildbot user --master={MASTERHOST} --op=remove \
  --username={USER} --passwd={USERPW} \
  --ids={ID1},{ID2},...
For --op=get:
buildbot user --master={MASTERHOST} --op=get \
  --username={USER} --passwd={USERPW} \
  --ids={ID1},{ID2},...
A note on --op=update: when updating the --bb_username and --bb_password, the --info option doesn't need additional {TYPE}={VALUE} pairs and can just take the {ID} portion.
2.7.1.4. .buildbot config directory
Many of the buildbot tools must be told how to contact the buildmaster that they interact with. This specification can be provided as a command-line argument, but most of the time it will be easier to set it in an options file. The buildbot command will look for a special directory named .buildbot, starting from the current directory (where the command was run) and crawling upwards, eventually looking in the user's home directory.
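That upward search can be pictured as a simple loop from the current directory toward the filesystem root, with the home directory as a fallback. This is a sketch of the described behavior, not Buildbot's actual implementation:

```python
import os

def find_buildbot_dir(start):
    """Walk from `start` up toward the filesystem root looking for a
    '.buildbot' directory; fall back to the user's home directory."""
    cur = os.path.abspath(start)
    while True:
        candidate = os.path.join(cur, ".buildbot")
        if os.path.isdir(candidate):
            return candidate
        parent = os.path.dirname(cur)
        if parent == cur:  # reached the filesystem root
            break
        cur = parent
    home = os.path.join(os.path.expanduser("~"), ".buildbot")
    return home if os.path.isdir(home) else None
```

The practical consequence of this search order is that a per-project .buildbot directory inside your tree overrides the one in your home directory.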
It will look for a file named options in this directory and will evaluate it as a Python script, looking for certain names to be set. You can just put simple name = 'value' pairs in this file to set the options. For a description of the names used in this file, please see the documentation for the individual buildbot sub-commands. The following is a brief sample of what this file's contents could be. # for status-reading tools masterstatus = 'buildbot.example.org:12345' # for 'sendchange' or the debug port master = 'buildbot.example.org:18990' Note carefully that the names in the options file usually do not match the command-line option names. master Equivalent to --master for sendchange. It is the location of the pb.PBChangeSource for sendchange. username Equivalent to --username for the sendchange command. branch Equivalent to --branch for the sendchange command. category Equivalent to --category for the sendchange command. try_connect Equivalent to --connect, this specifies how the try command should deliver its request to the buildmaster. The currently accepted values are ssh and pb. try_builders Equivalent to --builders, specifies which builders should be used for the try build. try_vc Equivalent to --vc for try, this specifies the version control system being used. try_branch Equivalent to --branch, this indicates that the current tree is on a non-trunk branch. try_topdir try_topfile Use try_topdir, equivalent to --try-topdir, to explicitly indicate the top of your working tree, or try_topfile, equivalent to --try-topfile, to name a file that will only be found in that top-most directory. try_host try_username try_dir When try_connect is ssh, the command will use try_host for --tryhost, try_username for --username, and try_dir for --trydir. Apologies for the confusing presence and absence of 'try'.
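The lookup behavior described above — crawl upwards from the current directory looking for .buildbot/options, fall back to the home directory, then evaluate the file as a Python script — can be sketched roughly as follows. This is an illustrative simplification, not Buildbot's actual implementation:

```python
import os

def find_options_file(start_dir, home=None):
    """Crawl upwards from start_dir looking for .buildbot/options,
    falling back to the user's home directory (sketch only)."""
    home = home or os.path.expanduser("~")
    cur = os.path.abspath(start_dir)
    while True:
        candidate = os.path.join(cur, ".buildbot", "options")
        if os.path.isfile(candidate):
            return candidate
        parent = os.path.dirname(cur)
        if parent == cur:  # reached the filesystem root
            break
        cur = parent
    candidate = os.path.join(home, ".buildbot", "options")
    return candidate if os.path.isfile(candidate) else None

def load_options(path):
    """Evaluate the options file as a Python script and return the
    simple name = 'value' bindings it sets."""
    namespace = {}
    with open(path) as f:
        exec(f.read(), namespace)
    return {k: v for k, v in namespace.items() if not k.startswith("__")}
```

With a file containing `master = 'buildbot.example.org:18990'` anywhere above the working directory, `load_options(find_options_file(os.getcwd()))` would return that binding.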
try_username try_password try_master Similarly, when try_connect is pb, the command will pay attention to try_username for --username, try_password for --passwd, and try_master for --master. try_wait masterstatus try_wait and masterstatus (equivalent to --wait and master, respectively) are used to ask the try command to wait for the requested build to complete. 2.7.2. buildbot-worker The buildbot-worker command-line tool is used for worker management only and does not provide any additional functionality. One can create, start, stop, and restart the worker. 2.7.2.1. create-worker This creates a new directory and populates it with files that let it be used as a worker's base directory. You must provide several arguments, which are used to create the initial buildbot.tac file. The -r option is advisable here, just as for create-master. buildbot-worker create-worker -r {BASEDIR} {MASTERHOST}:{PORT} {WORKERNAME} {PASSWORD} The create-worker options are described in Worker Options. 2.7.2.2. start This starts a worker which was already created in the given base directory. The daemon is launched in the background, with events logged to a file named twistd.log. buildbot-worker start [--nodaemon] BASEDIR The --nodaemon option instructs Buildbot to skip daemonizing. The process will start in the foreground and will only return to the command line when it is stopped. 2.7.2.3. restart buildbot-worker restart [--nodaemon] BASEDIR This restarts a worker which is already running. It is equivalent to a stop followed by a start. The --nodaemon option has the same meaning as for start. 2.7.2.4. stop This terminates the daemon worker running in the given directory. buildbot-worker stop BASEDIR © Copyright Buildbot Team Members. Built with Sphinx using a theme provided by Read the Docs.
https://www.php.net/manual/fr/function.session-name.php

PHP: session_name - Manual
session_name

(PHP 4, PHP 5, PHP 7, PHP 8)

session_name — Get and/or set the current session name

Description

session_name(?string $name = null): string|false

session_name() returns the name of the current session. If the name parameter is supplied, session_name() will update the session name and return the old session name.

When a new session name is supplied, session_name() modifies the HTTP cookie (and the output content when session.trans_sid is enabled). Once the HTTP cookie has been sent, calling session_name() raises an E_WARNING.

session_name() must be called before session_start() for the session to work properly.

The session name is reset to the default value stored in session.name at request startup. Thus, you need to call session_name() for every request (and before session_start() is called).

Parameters

name The session name is used as the cookie name and in URLs (e.g. PHPSESSID). It must contain only alphanumeric characters; it should be short and descriptive (especially for users with cookie warnings enabled). If name is supplied and is not null, the current session name is replaced with its value.

Warning Session names cannot consist only of digits; at least one letter must be present. Otherwise, a new session id is generated every time.

Return Values

Returns the name of the current session.
If the name parameter is supplied and the function updates the session name, the old session name is returned, or false on failure.

Changelog

Version Description 8.0.0 name is now nullable. 7.2.0 session_name() checks the session status; previously it only checked the cookie status. As a consequence, older versions of session_name() allowed session_name() to be called after session_start(), which could crash PHP and result in odd behavior.

Examples

Example #1 session_name() example

<?php
/* set the session name to WebsiteID */
$previous_name = session_name("WebsiteID");
echo "The previous session name was $previous_name <br />";
?>

See Also

The session.name configuration directive

User Contributed Notes

Hongliang Qiang ¶ 21 years ago This may sound like a no-brainer: the session_name() function will have no essential effect if you set session.auto_start to "true" in php.ini. The obvious explanation is that the session has already started and thus cannot be altered before the session_name() function, wherever it is in the script, is executed, for the same reason session_name() needs to be called before session_start() as documented. I know it is really not a big deal, but I had a quite hard time figuring this out and hope it might be helpful to someone like me.

php at wiz dot cx ¶ 17 years ago If you try to name a PHP session "example.com", it gets converted to "example_com" and everything breaks. Don't use a period in your session name.

relsqui at chiliahedron dot com ¶ 16 years ago Remember, kids: you MUST use session_name() first if you want to use session_set_cookie_params() to, say, change the session timeout.
Otherwise it won't work, won't give any error, and nothing in the documentation (that I've seen, anyway) will explain why. Thanks to brandan of bildungsroman.com, who left a note under session_set_cookie_params() explaining this, or I'd probably still be throwing my hands up about it.

Joseph Dalrymple ¶ 14 years ago For those wondering, this function is expensive! On a script that was executing in a consistent 0.0025 seconds, just the use of session_name("foo") shot my execution time up to ~0.09s. By simply sacrificing session_name("foo"), I sped my script up by roughly 0.09 seconds.

Victor H ¶ 10 years ago As Joseph Dalrymple said, adding session_name does slow down execution a little. But what I've observed is that it decreased the fluctuation between requests. Requests on my script fluctuated between 0.045 and 0.022 seconds. With session_name("myapp"), it goes to 0.050 and 0.045. Not a big deal, but that's a point to note. For those with problems setting the name: when session.auto_start is set to 1, you need to set session.name in php.ini!

mmulej at gmail dot com ¶ 4 years ago Hope this is not outside php.net's noting scope. session_name('name') must be set before session_start() because the former changes ini settings and the latter reads them. For the same reason, session_set_cookie_params($options) must be set before session_start() as well. I find it best to do the following.

function is_session_started() {
    if (php_sapi_name() === 'cli') return false;
    if (version_compare(phpversion(), '5.4.0', '>=')) return session_status() === PHP_SESSION_ACTIVE;
    return session_id() !== '';
}
if (!is_session_started()) {
    session_name($session_name);
    session_set_cookie_params($cookie_options);
    session_start();
}

tony at marston-home dot demon dot co dot uk ¶ 7 years ago The description that session_name() gets and/or sets the name of the current session is technically wrong.
It does nothing but deal with the value originally supplied by the session.name value within the php.ini file. Thus: $name = session_name(); is functionally equivalent to $name = ini_get('session.name'); and session_name('newname'); is functionally equivalent to ini_set('session.name','newname'); This also means that $old_name = session_name('newname'); is functionally equivalent to $old_name = ini_set('session.name','newname'); The current value of session.name is not attached to a session until session_start() is called. Once session_start() has used session.name to look up the session_id() in the cookie data, the name becomes irrelevant, as all further operations on the session data are keyed by the session_id(). Note that changing session.name while a session is currently active will not update the name in any session cookie. The new name does not take effect until the next call to session_start(), and this requires that the current session, which was created with the previous value of session.name, be closed.

tony at marston-home dot demon dot co dot uk ¶ 7 years ago The description has recently been modified to contain the statement "When new session name is supplied, session_name() modifies HTTP cookie". This is not correct, as session_name() has never modified any cookie data. A change in session.name does not become effective until session_start() is called, and it is session_start() that creates the cookie if it does not already exist. See the following bug report for details: https://bugs.php.net/bug.php?id=76413

descartavel1+php at gmail dot com ¶ 2 years ago Always try to set the prefix for your session name attribute to either `__Host-` or `__Secure-` to benefit from browsers' improved security.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes Also, if you have session.auto_start enabled, you must set this name in session.name in your config (php.ini, .htaccess, etc.).

Copyright © 2001-2026 The PHP Documentation Group
https://aws.amazon.com/de/waf/

Web Application Firewall - Web API Protection - AWS WAF - AWS

AWS WAF Protect your web applications from common attacks. Get 10 million common Bot Control requests per month with the AWS Free Tier.

Benefits of AWS WAF

Save time with managed rules Save time with managed rules so you have more time to build applications. Monitor, block, or rate-limit bots Monitor, block, or rate-limit common and pervasive bots more easily. Reduce security configuration steps Accelerate complex security configurations with a consolidated interface that reduces the complexity and number of security deployment steps by up to 80%. Centralized, actionable visibility A single, comprehensive interface combines core security features with specialized protections from partners to improve security visibility and control. This unified approach turns security data into actionable insights, removing operational friction and accelerating response to risk. Strengthen your security posture Preconfigured protection packs draw on AWS security expertise to provide instant-protection templates for specific industries and workload types such as APIs, PHP applications, and web services. These templates are continuously tuned to keep security up to date without requiring deep deployment expertise.
Receive continuous security recommendations to strengthen your overall security posture.

Why AWS WAF? With AWS WAF, you can create security rules that control bot traffic and block common attack patterns such as SQL injection or cross-site scripting (XSS).

Use cases

Filter web traffic Create rules to filter web requests based on conditions such as IP addresses, HTTP headers and body, or custom URIs. Learn more about creating rules Prevent account takeover fraud Monitor your application's login page for unauthorized access to user accounts using compromised credentials. Learn more about fraud prevention Automatic layer 7 DDoS protection Designed for continuous monitoring and automatic mitigation of application-layer (layer 7) Distributed Denial of Service (DDoS) events within seconds. Fast security implementation Launch new applications with confidence by using the streamlined, guided onboarding setup with a single-page interface to enable preconfigured security defaults tailored to your needs. Strengthen your security posture Expert-curated rule packs, consolidated insights, and ongoing recommendations give you immediate protection to optimize your security posture.

Get started with AWS WAF
http://docs.buildbot.net/current/manual/installation/misc.html#launching-the-daemons

2.2.6. Next Steps — Buildbot 4.3.0 documentation

2.2.6. Next Steps 2.2.6.1. Launching the daemons Both the buildmaster and the worker run as daemon programs. To launch them, pass the working directory to the buildbot and buildbot-worker commands, as appropriate: # start a master buildbot start [ BASEDIR ] # start a worker buildbot-worker start [ WORKER_BASEDIR ] The BASEDIR is optional and can be omitted if the current directory contains the buildbot configuration (the buildbot.tac file). buildbot start This command will start the daemon and then return, so normally it will not produce any output. To verify that the programs are indeed running, look for a pair of files named twistd.log and twistd.pid that should be created in the working directory. twistd.pid contains the process ID of the newly-spawned daemon. When the worker connects to the buildmaster, new directories will start appearing in its base directory. The buildmaster tells the worker to create a directory for each Builder which will be using that worker. All build operations are performed within these directories: CVS checkouts, compiles, and tests.
Once you get everything running, you will want to arrange for the buildbot daemons to be started at boot time. One way is to use cron , by putting them in a @reboot crontab entry [ 1 ] @reboot buildbot start [ BASEDIR ] When you run crontab to set this up, remember to do it as the buildmaster or worker account! If you add this to your crontab when running as your regular account (or worse yet, root), then the daemon will run as the wrong user, quite possibly as one with more authority than you intended to provide. It is important to remember that the environment provided to cron jobs and init scripts can be quite different than your normal runtime. There may be fewer environment variables specified, and the PATH may be shorter than usual. It is a good idea to test out this method of launching the worker by using a cron job with a time in the near future, with the same command, and then check twistd.log to make sure the worker actually started correctly. Common problems here are for /usr/local or ~/bin to not be on your PATH , or for PYTHONPATH to not be set correctly. Sometimes HOME is messed up too. If using systemd to launch buildbot-worker , it may be a good idea to specify a fixed PATH using the Environment directive (see systemd unit file example ). Some distributions may include conveniences to make starting buildbot at boot time easy. For instance, with the default buildbot package in Debian-based distributions, you may only need to modify /etc/default/buildbot (see also /etc/init.d/buildbot , which reads the configuration in /etc/default/buildbot ). Buildbot also comes with its own init scripts that provide support for controlling multi-worker and multi-master setups (mostly because they are based on the init script from the Debian package). With a little modification, these scripts can be used on both Debian and RHEL-based distributions. 
Thus, they may prove helpful to package maintainers who are working on buildbot (or to those who haven't yet split buildbot into master and worker packages). # install as /etc/default/buildbot-worker # or /etc/sysconfig/buildbot-worker worker/contrib/init-scripts/buildbot-worker.default # install as /etc/default/buildmaster # or /etc/sysconfig/buildmaster master/contrib/init-scripts/buildmaster.default # install as /etc/init.d/buildbot-worker worker/contrib/init-scripts/buildbot-worker.init.sh # install as /etc/init.d/buildmaster master/contrib/init-scripts/buildmaster.init.sh # ... and tell sysvinit about them chkconfig buildmaster reset # ... or update-rc.d buildmaster defaults 2.2.6.2. Launching worker as Windows service Security consideration Setting up the buildbot worker as a Windows service requires Windows administrator rights. It is important to distinguish the installation stage from service execution. It is strongly recommended to run the Buildbot worker with the lowest required access rights. It is recommended to run the service under a local, non-privileged machine account. If you decide to run the Buildbot worker under a domain account, it is recommended to create a dedicated, strongly limited user account that will run the Buildbot worker service. Windows service setup In this description, we assume that the buildbot worker account is the local account worker. In case the worker should run under a domain user account, please replace .\worker with <domain>\worker. Please replace <worker.passwd> with the given user's password. Please replace <worker.basedir> with the full/absolute directory specification of the created worker (what is called BASEDIR in Creating a worker).
buildbot_worker_windows_service --user .\worker --password <worker.passwd> --startup auto install powershell -command "& {&'New-Item' -path Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BuildBot\Parameters}" powershell -command "& {&'set-ItemProperty' -path Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BuildBot\Parameters -Name directories -Value '<worker.basedir>'}" The first command automatically adds the user rights needed to run Buildbot as a service. Modify environment variables This step is optional and may depend on your needs. At the least, we have found it useful to have a dedicated temp folder for worker steps; it makes it much easier to discover which temporary files your builds leak or mishandle. As Administrator, run regedit. Open the key Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Buildbot. Create a new value of type REG_MULTI_SZ called Environment. Add entries like TMP=c:\bbw\tmp TEMP=c:\bbw\tmp Check that Buildbot, configured as a Windows service, can start correctly As an admin user, run the command net start buildbot. If everything goes well, you should see the following output: The BuildBot service is starting. The BuildBot service was started successfully. Troubleshooting If anything goes wrong, check the Twisted log at C:\bbw\worker\twistd.log and the Windows system event log (eventvwr.msc on the command line, Show-EventLog in PowerShell). 2.2.6.3. Logfiles While a buildbot daemon runs, it emits text to a logfile, named twistd.log. A command like tail -f twistd.log is useful to watch the command output as it runs. The buildmaster will announce any errors with its configuration file in the logfile, so it is a good idea to look at the log at startup time to check for any problems. Most buildmaster activities will cause lines to be added to the log. 2.2.6.4.
Shutdown To stop a buildmaster or worker manually, use: buildbot stop [ BASEDIR ] # or buildbot-worker stop [ WORKER_BASEDIR ] This simply looks for the twistd.pid file and kills whatever process is identified within. At system shutdown, all processes are sent a SIGKILL. The buildmaster and worker will respond to this by shutting down normally. The buildmaster will respond to a SIGHUP by re-reading its config file. Of course, this only works on Unix-like systems with signal support, and not on Windows. The following shortcut is available: buildbot reconfig [ BASEDIR ] When you update the Buildbot code to a new release, you will need to restart the buildmaster and/or worker before they can take advantage of the new code. You can do a buildbot stop BASEDIR and buildbot start BASEDIR in succession, or you can use the restart shortcut, which does both steps for you: buildbot restart [ BASEDIR ] Workers can similarly be restarted with: buildbot-worker restart [ BASEDIR ] There are certain configuration changes that are not handled cleanly by buildbot reconfig. If this occurs, buildbot restart is a more robust way to fully switch over to the new configuration. buildbot restart may also be used to start a stopped Buildbot instance. This behavior is useful when writing scripts that stop, start, and restart Buildbot. A worker may also be gracefully shut down from the web UI. This is useful for shutting down a worker without interrupting any current builds. The buildmaster will wait until the worker has finished all its current builds, and will then tell the worker to shut down. [ 1 ] This @reboot syntax is understood by Vixie cron, which is the flavor usually provided with Linux systems. Other unices may have a cron that doesn't understand @reboot.
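The stop behavior described above — read twistd.pid from the base directory and signal the process it names — can be sketched as follows. This is an illustrative simplification, not Buildbot's actual implementation, and SIGTERM is assumed here as the polite-shutdown signal:

```python
import os
import signal

def stop_daemon(basedir, sig=signal.SIGTERM):
    """Sketch: read the process ID from twistd.pid in the daemon's
    base directory and send it a signal (illustration only; not
    Buildbot's actual code)."""
    pidfile = os.path.join(basedir, "twistd.pid")
    with open(pidfile) as f:
        pid = int(f.read().strip())
    os.kill(pid, sig)  # with sig=0 this merely checks the process exists
    return pid
```

Passing `sig=0` is a standard POSIX trick to test whether the process is still alive without affecting it.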
https://docs.aws.amazon.com/fr_fr/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-testing-debugging.html

Testing and debugging Lambda@Edge functions - Amazon CloudFront

It is important to test your Lambda@Edge function code standalone, to make sure it performs the intended task, and to run integration tests to make sure the function works correctly with CloudFront. During integration testing, or after your function has been deployed, you may need to troubleshoot CloudFront errors such as HTTP 5xx errors. Errors can be of different types: an invalid response returned by the Lambda function, execution errors when the function is triggered, or errors caused by execution throttling by the Lambda service. The sections in this topic give strategies for determining which type of failure is causing the problem, and then the steps to follow to resolve it. Note When you view CloudWatch log files or metrics to troubleshoot errors, be aware that they are displayed or stored in the AWS Region closest to the location where the function executed.
So if, for example, you have a website or web application with users in the United Kingdom, and a Lambda function associated with your distribution, you must change the AWS Region to view the CloudWatch metrics or log files for London. For more information, see Determining the Lambda@Edge Region. Topics Testing your Lambda@Edge functions Identifying Lambda@Edge function errors in CloudFront Troubleshooting invalid Lambda@Edge function responses (validation errors) Troubleshooting Lambda@Edge function execution errors Determining the Lambda@Edge Region Determining whether your account pushes logs to CloudWatch Testing your Lambda@Edge functions There are two steps to testing your Lambda function: standalone testing and integration testing. Testing standalone functionality Before you add your Lambda function to CloudFront, make sure to test it first by using the testing features in the Lambda console or by using other methods. For more information about testing in the Lambda console, see Invoke a Lambda function with the console in the AWS Lambda Developer Guide. Testing your function's operation in CloudFront It is important to complete integration testing, where your function is associated with a distribution and runs based on a CloudFront event. Make sure that the function is triggered for the right event, and that it returns a response that is valid and correct for CloudFront. For example, make sure that the event structure is correct, that only valid headers are included, and so on. As you iterate on integration testing of your function in the Lambda console, refer to the steps in the Lambda@Edge tutorial to modify your code or the CloudFront trigger that invokes your function.
For example, make sure that you are working with a numbered version of your function, as described in this tutorial step: Step 4: Add a CloudFront trigger to run the function. As you make changes and deploy them, be aware that it takes several minutes for your updated function and CloudFront triggers to replicate across all Regions. This typically takes a few minutes but can take up to 15 minutes. You can check whether replication is finished by going to the CloudFront console and viewing your distribution. To check whether replication of your deployment is finished Open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home . Choose the name of the distribution. Check that the distribution status has changed from In Progress to Deployed, which means that your function has been replicated. Then follow the steps in the next section to verify that the function works correctly. Be aware that testing in the console validates only your function's logic; it does not apply service quotas (formerly known as limits) that are specific to Lambda@Edge. Identifying Lambda@Edge function errors in CloudFront After you have verified that your function's logic works correctly, you might still see HTTP 5xx errors when your function executes in CloudFront. HTTP 5xx errors can be returned for a variety of reasons, including Lambda function errors or other CloudFront issues. If you use Lambda@Edge functions, you can use the graphs in the CloudFront console to identify the cause of the error, and then try to fix it.
For example, you can see whether HTTP 5xx errors are caused by CloudFront or by Lambda functions, and then, for specific functions, you can review the related log files to investigate the issue. To troubleshoot HTTP errors in CloudFront in general, see the troubleshooting steps in the following topic: Troubleshooting error response status codes in CloudFront.

What causes Lambda@Edge function errors in CloudFront
There are several reasons why a Lambda function might cause an HTTP 5xx error, and the troubleshooting steps to take depend on the type of error. Errors can be categorized as follows:
A Lambda function execution error. An execution error occurs when CloudFront doesn't get a response from Lambda because of unhandled exceptions in the function or an error in the code. For example, if the code includes callback(Error).
An invalid Lambda function response is returned to CloudFront. After the function runs, CloudFront receives a response from Lambda. An error is returned if the object structure of the response doesn't conform to the Lambda@Edge event structure, or if the response contains invalid headers or other invalid fields.
Execution in CloudFront is throttled because of Lambda service quotas (formerly known as limits). The Lambda service throttles executions in each Region and returns an error if you exceed the quota. For more information, see Quotas on Lambda@Edge.

How to determine the type of failure
To help you decide where to focus as you debug and work to resolve errors that CloudFront returns, it's helpful to identify why CloudFront is returning an HTTP error.
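As a hedged illustration of the first two error categories, using Python (one of the runtimes Lambda@Edge supports): an unhandled exception produces an execution error, while a return value that doesn't match the response structure produces a validation error. The handler names below are made up for this sketch.

```python
# Illustrative only: hypothetical origin-response handlers showing how the
# error categories above arise. None of these names come from CloudFront.

def handler_execution_error(event, context):
    # Unhandled exception -> CloudFront never receives a response
    # (an execution error, like calling callback(Error) in Node.js).
    raise RuntimeError("unhandled failure in function code")

def handler_invalid_response(event, context):
    # Returns an object that doesn't conform to the Lambda@Edge
    # response structure (no 'status' field) -> a validation error.
    return {"statuscode": 200}

def handler_valid_response(event, context):
    # A minimal generated response of the shape CloudFront can validate.
    return {"status": "200", "statusDescription": "OK", "body": "hello"}

try:
    handler_execution_error({}, None)
except RuntimeError as exc:
    print("execution error:", exc)

print("invalid shape:", "status" in handler_invalid_response({}, None))
print("valid shape:", "status" in handler_valid_response({}, None))
```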
To start, you can use the graphs provided in the Monitoring section of the CloudFront console in the AWS Management Console. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitoring CloudFront metrics with Amazon CloudWatch. The following graphs can be especially helpful when you want to trace whether errors are returned by origins or by a Lambda function, and to narrow down the type of issue when it's an error from a Lambda function.

Error rates graph
One of the graphs that you can view on the Overview tab for each of your distributions is an Error rates graph. This graph displays the error rate as a percentage of the total number of requests to your distribution. The graph shows the total error rate, total 4xx errors, total 5xx errors, and total 5xx errors from Lambda functions. Based on the type and volume of errors, you can take steps to investigate and resolve the underlying issue.
If you see Lambda errors, you can investigate further by examining the specific types of errors that the function returns. The Lambda@Edge errors tab includes graphs that categorize function errors by type, to help you pinpoint the issue for a specific function.
If you see CloudFront errors, you can troubleshoot and work to fix origin errors or change your CloudFront configuration. For more information, see Troubleshooting error response status codes in CloudFront.
Execution errors and invalid function responses graphs
The Lambda@Edge errors tab includes graphs that categorize Lambda@Edge errors for a specific distribution, by type. For example, one graph shows all execution errors by AWS Region. To make it easier to troubleshoot issues, you can look for specific problems by opening and examining the log files of specific functions, by Region.

To view the log files for a specific function by Region:
On the Lambda@Edge errors tab, under Associated Lambda@Edge functions, choose the name of the function, and then choose View metrics.
Then, on the page that shows your function's name, in the upper-right corner, choose View function logs, and then choose a Region. For example, if you see issues in the Errors graph for the US West (Oregon) Region, choose that Region from the drop-down list. This opens the Amazon CloudWatch console.
In the CloudWatch console for that Region, under Log streams, choose a log stream to view the events related to the function.
In addition, read the following sections in this chapter for more recommendations on troubleshooting and fixing errors.

Throttles graph
The Lambda@Edge errors tab also includes a Throttles graph. Occasionally, the Lambda service throttles your function invocations by Region if you reach the Regional concurrency quota (formerly known as limit). If you see a limit exceeded error, it means that your function reached a quota that the Lambda service imposes on executions in a Region. For more information, including how to request a quota increase, see Quotas on Lambda@Edge.
For an example of how to use this information to troubleshoot HTTP errors, see the blog post Four steps for debugging your content delivery on AWS.

Troubleshooting invalid Lambda@Edge function responses (validation errors)
If you identify that your issue is a Lambda validation error, it means that your Lambda function is returning an invalid response to CloudFront. Follow the guidance in this section to review your function and make sure that your response conforms to CloudFront requirements.
CloudFront validates the response from a Lambda function in two ways:
The Lambda response must conform to the required object structure. Examples of bad object structure include the following: unparsable JSON, missing required fields, and an invalid object in the response. For more information, see Lambda@Edge event structure.
The response must include only valid object values. An error occurs if the response includes a valid object but has values that aren't supported. Examples include the following: adding or updating disallowed or read-only headers (see Restrictions on edge functions), exceeding the maximum body size (see Limits on the size of the generated response in the Lambda@Edge errors topic), and invalid characters or values (see Lambda@Edge event structure).
When Lambda returns an invalid response to CloudFront, error messages are written to log files that CloudFront pushes to CloudWatch in the Region where the Lambda function ran. Sending the log files to CloudWatch when there's an invalid response is the default behavior.
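The two validation layers can be sketched as a local pre-check before you deploy. The rules below are an illustrative subset only (a required status field and a couple of headers treated as read-only here); the authoritative rules are in the Lambda@Edge event structure and the restrictions on edge functions.

```python
# Sketch of a local pre-check for a generated Lambda@Edge response.
# READ_ONLY is an illustrative subset, not the full CloudFront list.
READ_ONLY = {"transfer-encoding", "via"}

def validate_response(response):
    """Return a list of problems; an empty list means the basic checks passed."""
    problems = []
    if not isinstance(response, dict):
        return ["response is not a JSON object"]
    if "status" not in response:
        problems.append("missing required field: status")
    for name in response.get("headers", {}):
        if name.lower() in READ_ONLY:
            problems.append(f"read-only header set: {name}")
    return problems

print(validate_response({"status": "200"}))           # []
print(validate_response({"headers": {"Via": [{}]}}))  # two problems reported
```

A check like this cannot replace CloudFront's own validation, but it catches the structural mistakes described above before they show up as 5xx errors.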
However, if you associated a Lambda function with CloudFront before this feature was launched, it might not be enabled for your function. For more information, see Determine whether your account sends logs to CloudWatch later in this topic.
CloudFront sends the log files to the Region corresponding to where your function ran, in the log group that's associated with your distribution. Log groups have the following format: /aws/cloudfront/LambdaEdge/DistributionId, where DistributionId is your distribution's ID. To determine the Region where you can find the CloudWatch log files, see Setting the Lambda@Edge Region later in this topic.
If the error is reproducible, you can create a new request that results in the error, and then find the request ID in a failed CloudFront response (the X-Amz-Cf-Id header) to locate a single failure in the log files. The log file entry includes information that can help you identify why the error is being returned, and it also shows the corresponding Lambda request ID, so you can analyze the root cause in the context of a single request.
If an error is intermittent, you can use CloudFront access logs to find the request ID of a request that failed, and then search the CloudWatch logs for the corresponding error messages. For more information, see the previous section, How to determine the type of failure.
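The log group naming convention can be applied mechanically when you go looking for a failure. The helper function and the distribution ID below are hypothetical; only the /aws/cloudfront/LambdaEdge/DistributionId format comes from the documentation.

```python
# Build the CloudWatch log group name for a distribution, following the
# documented format /aws/cloudfront/LambdaEdge/DistributionId.
def lambda_edge_log_group(distribution_id: str) -> str:
    return f"/aws/cloudfront/LambdaEdge/{distribution_id}"

# "E2EXAMPLE12345" is a made-up distribution ID for illustration.
print(lambda_edge_log_group("E2EXAMPLE12345"))
# -> /aws/cloudfront/LambdaEdge/E2EXAMPLE12345
```

Remember to open this log group in the Region where the function actually ran, as described above.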
Troubleshooting Lambda@Edge function execution errors
If the issue is a Lambda execution error, it can be helpful to add logging statements to your Lambda function so that it writes messages to CloudWatch log files; you can then monitor how your function executes in CloudFront and determine whether it works as expected. Search for those statements in the CloudWatch log files to verify that your function is working.
Note: Even if you haven't changed your Lambda@Edge function, updates to the Lambda function execution environment might affect it and could return an execution error. For information about testing and migrating to a later version, see Upcoming updates to the AWS Lambda and AWS Lambda@Edge execution environment.

Setting the Lambda@Edge Region
To see the Regions where your Lambda@Edge function is receiving traffic, view the function's metrics on the CloudFront console in the AWS Management Console. Metrics are displayed for each AWS Region. On the same page, you can choose a Region and view the log files for that Region so that you can investigate issues. You must review the CloudWatch log files in the correct AWS Region to see the log files that were created when CloudFront ran your Lambda function. For more information about viewing graphs in the Monitoring section of the CloudFront console, see Monitoring CloudFront metrics with Amazon CloudWatch.

Determine whether your account sends logs to CloudWatch
By default, CloudFront enables logging of invalid Lambda function responses and sends the log files to CloudWatch.
If you added Lambda@Edge functions to CloudFront before the invalid Lambda function response log feature was released, logging is enabled the next time that you update your Lambda@Edge configuration, for example, by adding a CloudFront trigger.
You can verify that sending log files to CloudWatch is enabled for your account by doing the following:
Check whether the logs appear in CloudWatch. Make sure to look in the Region where the Lambda@Edge function ran. For more information, see Setting the Lambda@Edge Region.
Determine whether the associated service-linked role exists in your IAM account. You must have the IAM role AWSServiceRoleForCloudFrontLogger in your account. For more information about this role, see Service-linked roles for Lambda@Edge. | 2026-01-13T09:30:34 |
https://llvm.org/docs/GettingStarted.html#checkout%20 | Getting Started with the LLVM System — LLVM 22.0.0git documentation. Getting Started with the LLVM System ¶ Overview Getting the Source Code and Building LLVM Stand-alone Builds Requirements Hardware Software Host C++ Toolchain, both Compiler and Standard Library Getting a Modern Host C++ Toolchain Getting Started with LLVM Terminology and Notation Sending patches Bisecting commits Reverting a change Local LLVM Configuration Compiling the LLVM Suite Source Code Cross-Compiling LLVM The Location of LLVM Object Files Optional Configuration Items Directory Layout llvm/cmake llvm/examples llvm/include llvm/lib llvm/bindings llvm/projects llvm/test test-suite llvm/tools llvm/utils An Example Using the LLVM Tool Chain Example with clang Common Problems Links Overview ¶ Welcome to the LLVM project! The LLVM project has multiple components. The core of the project is itself called “LLVM”. This contains all of the tools, libraries, and header files needed to process intermediate representations and convert them into object files. Tools include an assembler, disassembler, bitcode analyzer, and bitcode optimizer. It also contains basic regression tests. C-like languages use the Clang front end. This component compiles C, C++, Objective-C, and Objective-C++ code into LLVM bitcode – and from there into object files, using LLVM. Other components include: the libc++ C++ standard library , the LLD linker , and more.
Getting the Source Code and Building LLVM ¶ Check out LLVM (including subprojects like Clang):
git clone https://github.com/llvm/llvm-project.git
Or, on Windows:
git clone --config core.autocrlf=false https://github.com/llvm/llvm-project.git
To save storage and speed up the checkout time, you may want to do a shallow clone. For example, to get the latest revision of the LLVM project, use
git clone --depth 1 https://github.com/llvm/llvm-project.git
You are likely not interested in the user branches in the repo (used for stacked pull requests and reverts); you can filter them from your git fetch (or git pull) with this configuration:
git config --add remote.origin.fetch '^refs/heads/users/*'
git config --add remote.origin.fetch '^refs/heads/revert-*'
Configure and build LLVM and Clang:
cd llvm-project
cmake -S llvm -B build -G <generator> [options]
Some common build system generators are:
Ninja — for generating Ninja build files. Most LLVM developers use Ninja.
Unix Makefiles — for generating make-compatible parallel makefiles.
Visual Studio — for generating Visual Studio projects and solutions.
Xcode — for generating Xcode projects.
See the CMake docs for a more comprehensive list.
Some common options:
-DLLVM_ENABLE_PROJECTS='...' — A semicolon-separated list of the LLVM subprojects you’d like to additionally build. Can include any of: clang, clang-tools-extra, lldb, lld, polly, or cross-project-tests. For example, to build LLVM, Clang, and LLD, use -DLLVM_ENABLE_PROJECTS="clang;lld".
-DCMAKE_INSTALL_PREFIX=directory — Specify for directory the full pathname of where you want the LLVM tools and libraries to be installed (default /usr/local).
-DCMAKE_BUILD_TYPE=type — Controls the optimization level and debug information of the build. Valid options for type are Debug, Release, RelWithDebInfo, and MinSizeRel. For more detailed information, see CMAKE_BUILD_TYPE.
-DLLVM_ENABLE_ASSERTIONS=ON — Compile with assertion checks enabled (default is ON for Debug builds, OFF for all other build types).
-DLLVM_USE_LINKER=lld — Link with the lld linker, assuming it is installed on your system. This can dramatically speed up link times if the default linker is slow.
-DLLVM_PARALLEL_{COMPILE,LINK,TABLEGEN}_JOBS=N — Limit the number of compile/link/tablegen jobs running in parallel at the same time. This is especially important for linking, since linking can use lots of memory. If you run into memory issues building LLVM, try setting this to limit the maximum number of compile/link/tablegen jobs running at the same time.
Build with cmake --build build [--target <target>], or invoke the build system specified above directly. The default target (i.e. cmake --build build or make -C build) will build all of LLVM. The check-all target (i.e. ninja check-all) will run the regression tests to ensure everything is in working order. CMake will generate build targets for each tool and library, and most LLVM sub-projects generate their own check-<project> target. Running a serial build will be slow. To improve speed, try running a parallel build. That’s done by default in Ninja; for make, use the option -j NN, where NN is the number of parallel jobs, e.g. the number of available CPUs.
A basic CMake and build/test invocation which only builds LLVM and no other subprojects:
cmake -S llvm -B build -G Ninja -DCMAKE_BUILD_TYPE=Debug
ninja -C build check-llvm
This will set up an LLVM build with debugging info, then compile LLVM and run LLVM tests. For more detailed information on CMake options, see CMake. If you get build or test failures, see below. Consult the Getting Started with LLVM section for detailed information on configuring and compiling LLVM. Go to Directory Layout to learn about the layout of the source code tree.
Stand-alone Builds ¶ Stand-alone builds allow you to build a sub-project against a pre-built version of the clang or llvm libraries that is already present on your system. You can use the source code from a standard checkout of the llvm-project (as described above) to do stand-alone builds, but you may also build from a sparse checkout or from the tarballs available on the releases page.
For stand-alone builds, you must have an llvm install that is configured properly to be consumable by stand-alone builds of the other projects. This could be a distro-provided LLVM install, or you can build it yourself, like this:
cmake -G Ninja -S path/to/llvm-project/llvm -B $builddir \
      -DLLVM_INSTALL_UTILS=ON \
      -DCMAKE_INSTALL_PREFIX=/path/to/llvm/install/prefix \
      <other options>
ninja -C $builddir install
Once llvm is installed, to configure a project for a stand-alone build, invoke CMake like this:
cmake -G Ninja -S path/to/llvm-project/$subproj \
      -B $builddir_subproj \
      -DLLVM_EXTERNAL_LIT=/path/to/lit \
      -DLLVM_ROOT=/path/to/llvm/install/prefix
Notice that:
The stand-alone build needs to happen in a folder that is not the original folder where LLVM was built ($builddir != $builddir_subproj).
LLVM_ROOT should point to the prefix of your llvm installation, so for example, if llvm is installed into /usr/bin and /usr/lib64, then you should pass -DLLVM_ROOT=/usr/.
Both the LLVM_ROOT and LLVM_EXTERNAL_LIT options are required to do stand-alone builds for all sub-projects. Additional required options for each sub-project can be found in the table below.
The check-$subproj and install build targets are supported for the sub-projects listed in the table below.
Sub-Project | Required Sub-Directories | Required CMake Options
llvm | llvm, cmake, third-party | LLVM_INSTALL_UTILS=ON
clang | clang, cmake | CLANG_INCLUDE_TESTS=ON (required for check-clang only)
lld | lld, cmake | (none)

Example of building stand-alone clang:
#!/bin/sh
build_llvm=`pwd`/build-llvm
build_clang=`pwd`/build-clang
installprefix=`pwd`/install
llvm=`pwd`/llvm-project
mkdir -p $build_llvm
mkdir -p $installprefix
cmake -G Ninja -S $llvm/llvm -B $build_llvm \
      -DLLVM_INSTALL_UTILS=ON \
      -DCMAKE_INSTALL_PREFIX=$installprefix \
      -DCMAKE_BUILD_TYPE=Release
ninja -C $build_llvm install
cmake -G Ninja -S $llvm/clang -B $build_clang \
      -DLLVM_EXTERNAL_LIT=$build_llvm/utils/lit \
      -DLLVM_ROOT=$installprefix
ninja -C $build_clang

Requirements ¶ Before you begin to use the LLVM system, review the requirements below. This may save you some trouble by knowing ahead of time what hardware and software you will need.

Hardware ¶ LLVM is known to work on the following host platforms:
OS | Arch | Compilers
Linux | x86 (1) | GCC, Clang
Linux | amd64 | GCC, Clang
Linux | ARM | GCC, Clang
Linux | AArch64 | GCC, Clang
Linux | LoongArch | GCC, Clang
Linux | Mips | GCC, Clang
Linux | PowerPC | GCC, Clang
Linux | RISC-V | GCC, Clang
Linux | SystemZ | GCC, Clang
Solaris | V9 (Ultrasparc) | GCC
DragonFlyBSD | amd64 | GCC, Clang
FreeBSD | x86 (1) | GCC, Clang
FreeBSD | amd64 | GCC, Clang
FreeBSD | AArch64 | GCC, Clang
NetBSD | x86 (1) | GCC, Clang
NetBSD | amd64 | GCC, Clang
OpenBSD | x86 (1) | GCC, Clang
OpenBSD | amd64 | GCC, Clang
macOS (2) | PowerPC | GCC
macOS | x86 | GCC, Clang
macOS | arm64 | Clang
Cygwin/Win32 | x86 (1, 3) | GCC
Windows | x86 (1) | Visual Studio
Windows x64 | x86-64 | Visual Studio, Clang (4)
Windows on Arm | ARM64 | Visual Studio, Clang (4)

Note
(1) Code generation supported for Pentium processors and up.
(2) Code generation supported for 32-bit ABI only.
(3) To use LLVM modules on a Win32-based system, you may configure LLVM with -DBUILD_SHARED_LIBS=On.
(4) Visual Studio alone can compile LLVM. When using Clang, you must also have Visual Studio installed.
Note that Debug builds require a lot of time and disk space.
An LLVM-only build will need about 1-3 GB of space. A full build of LLVM and Clang will need around 15-20 GB of disk space. The exact space requirements will vary by system. (It is so large because of all the debugging information and the fact that the libraries are statically linked into multiple tools). If you are space-constrained, you can build only selected tools or only selected targets. The Release build requires considerably less space. The LLVM suite may compile on other platforms, but it is not guaranteed to do so. If compilation is successful, the LLVM utilities should be able to assemble, disassemble, analyze, and optimize LLVM bitcode. Code generation should work as well, although the generated native code may not work on your platform.

Software ¶ Compiling LLVM requires that you have several software packages installed. The table below lists those required packages. The Package column is the usual name for the software package that LLVM depends on. The Version column provides “known to work” versions of the package. The Notes column describes how LLVM uses the package and provides other details.
Package | Version | Notes
CMake | >=3.20.0 | Makefile/workspace generator
python | >=3.8 | Automated test suite (1)
zlib | >=1.2.3.4 | Compression library (2)
GNU Make | 3.79, 3.79.1 | Makefile/build processor (3)
PyYAML | >=5.1 | Header generator (4)

Note
(1) Only needed if you want to run the automated test suite in the llvm/test directory, or if you plan to utilize any Python libraries, utilities, or bindings.
(2) Optional, adds compression/uncompression capabilities to selected LLVM tools.
(3) Optional, you can use any other build tool supported by CMake.
(4) Only needed when building libc with New Headergen. Mainly used by libc.
Additionally, your compilation host is expected to have the usual plethora of Unix utilities.
Specifically:
ar — archive library builder
bzip2 — bzip2 command for distribution generation
bunzip2 — bunzip2 command for distribution checking
chmod — change permissions on a file
cat — output concatenation utility
cp — copy files
date — print the current date/time
echo — print to standard output
egrep — extended regular expression search utility
find — find files/dirs in a file system
grep — regular expression search utility
gzip — gzip command for distribution generation
gunzip — gunzip command for distribution checking
install — install directories/files
mkdir — create a directory
mv — move (rename) files
ranlib — symbol table builder for archive libraries
rm — remove (delete) files and directories
sed — stream editor for transforming output
sh — Bourne shell for make build scripts
tar — tape archive for distribution generation
test — test things in file system
unzip — unzip command for distribution checking
zip — zip command for distribution generation

Host C++ Toolchain, both Compiler and Standard Library ¶ LLVM is very demanding of the host C++ compiler, and as such tends to expose bugs in the compiler. We also attempt to follow improvements and developments in the C++ language and library reasonably closely. As such, we require a modern host C++ toolchain, both compiler and standard library, in order to build LLVM. LLVM is written using the subset of C++ documented in coding standards. To enforce this language version, we check the most popular host toolchains for specific minimum versions in our build systems:
Clang 5.0
Apple Clang 10.0
GCC 7.4
Visual Studio 2019 16.8
Anything older than these toolchains may work, but will require forcing the build system with a special option and is not really a supported host platform. Also note that older versions of these compilers have often crashed or miscompiled LLVM.
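As a rough illustration, the minimum-version gate performed by the build system can be sketched as a version-tuple comparison. The minimums come from the list above (Visual Studio is omitted because its version string doesn't parse as a plain dotted number); the parsing helper is our own and ignores vendor suffixes.

```python
# Sketch: check a host compiler version against the documented minimums.
# Only the dotted-number toolchains from the list above are included here.
MINIMUMS = {
    "clang": (5, 0),
    "apple-clang": (10, 0),
    "gcc": (7, 4),
}

def parse_version(text: str) -> tuple:
    """Parse a plain dotted version string like '7.4.0' into a tuple."""
    return tuple(int(part) for part in text.split("."))

def new_enough(compiler: str, version: str) -> bool:
    """True if the given version meets the documented minimum."""
    return parse_version(version) >= MINIMUMS[compiler]

print(new_enough("gcc", "7.4.0"))    # True
print(new_enough("clang", "4.0.1"))  # False
```

Tuple comparison handles versions of different lengths correctly here: (7, 4, 0) compares greater than or equal to (7, 4), so patch releases of the minimum version pass.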
For less widely used host toolchains such as ICC or xlC, be aware that a very recent version may be required to support all of the C++ features used in LLVM. We track certain versions of software that are known to fail when used as part of the host toolchain. These even include linkers at times.
GNU ld 2.16.X. Some 2.16.X versions of the ld linker will produce very long warning messages complaining that some “.gnu.linkonce.t.*” symbol was defined in a discarded section. You can safely ignore these messages as they are erroneous and the linkage is correct. These messages disappear using ld 2.17.
GNU binutils 2.17: Binutils 2.17 contains a bug which causes huge link times (minutes instead of seconds) when building LLVM. We recommend upgrading to a newer version (2.17.50.0.4 or later).
GNU Binutils 2.19.1 Gold: This version of Gold contains a bug which causes intermittent failures when building LLVM with position independent code. The symptom is an error about cyclic dependencies. We recommend upgrading to a newer version of Gold.

Getting a Modern Host C++ Toolchain ¶ This section mostly applies to Linux and older BSDs. On macOS, you should have a sufficiently modern Xcode, or you will likely need to upgrade until you do. Windows does not have a “system compiler”, so you must install either Visual Studio 2019 (or later), or a recent version of mingw64. FreeBSD 10.0 and newer have a modern Clang as the system compiler. However, some Linux distributions and some other or older BSDs sometimes have extremely old versions of GCC. These steps attempt to help you upgrade your compiler even on such a system. However, if at all possible, we encourage you to use a recent version of a distribution with a modern system compiler that meets these requirements. Note that it is tempting to install a prior version of Clang and libc++ to be the host compiler; however, libc++ was not well tested or set up to build on Linux until relatively recently.
As a consequence, this guide suggests just using libstdc++ and a modern GCC as the initial host in a bootstrap, and then using Clang (and potentially libc++). The first step is to get a recent GCC toolchain installed. The most common distribution on which users have struggled with the version requirements is Ubuntu Precise, 12.04 LTS. For this distribution, one easy option is to install the toolchain testing PPA and use it to install a modern GCC. There is a really nice discussion of this on the ask ubuntu stack exchange and a github gist with updated commands. However, not all users can use PPAs and there are many other distributions, so it may be necessary (or just useful, if you’re here you are doing compiler development after all) to build and install GCC from source. It is also quite easy to do these days.
Easy steps for installing a specific version of GCC:
% gcc_version=7.4.0
% wget https://ftp.gnu.org/gnu/gcc/gcc-${gcc_version}/gcc-${gcc_version}.tar.bz2
% wget https://ftp.gnu.org/gnu/gcc/gcc-${gcc_version}/gcc-${gcc_version}.tar.bz2.sig
% wget https://ftp.gnu.org/gnu/gnu-keyring.gpg
% signature_invalid=`gpg --verify --no-default-keyring --keyring ./gnu-keyring.gpg gcc-${gcc_version}.tar.bz2.sig`
% if [ $signature_invalid ]; then echo "Invalid signature"; exit 1; fi
% tar -xvjf gcc-${gcc_version}.tar.bz2
% cd gcc-${gcc_version}
% ./contrib/download_prerequisites
% cd ..
% mkdir gcc-${gcc_version}-build
% cd gcc-${gcc_version}-build
% $PWD/../gcc-${gcc_version}/configure --prefix=$HOME/toolchains --enable-languages=c,c++
% make -j$(nproc)
% make install
For more details, check out the excellent GCC wiki entry, where I got most of this information from. Once you have a GCC toolchain, configure your build of LLVM to use the new toolchain for your host compiler and C++ standard library.
Because the new version of libstdc++ is not on the system library search path, you need to pass extra linker flags so that it can be found at link time (-L) and at runtime (-rpath). If you are using CMake, this invocation should produce working binaries:
% mkdir build
% cd build
% CC=$HOME/toolchains/bin/gcc CXX=$HOME/toolchains/bin/g++ \
  cmake .. -DCMAKE_CXX_LINK_FLAGS="-Wl,-rpath,$HOME/toolchains/lib64 -L$HOME/toolchains/lib64"
If you fail to set rpath, most LLVM binaries will fail on startup with a message from the loader similar to libstdc++.so.6: version `GLIBCXX_3.4.20' not found. This means you need to tweak the -rpath linker flag. This method will add an absolute path to the rpath of all executables. That’s fine for local development. If you want to distribute the binaries you build so that they can run on older systems, copy libstdc++.so.6 into the lib/ directory. All of LLVM’s shipping binaries have an rpath pointing at $ORIGIN/../lib, so they will find libstdc++.so.6 there. Non-distributed binaries don’t have an rpath set and won’t find libstdc++.so.6. Pass -DLLVM_LOCAL_RPATH="$HOME/toolchains/lib64" to CMake to add an absolute path to libstdc++.so.6 as above. Since these binaries are not distributed, having an absolute local path is fine for them. When you build Clang, you will need to give it access to a modern C++ standard library in order to use it as your new host in part of a bootstrap. There are two easy ways to do this: either build (and install) libc++ along with Clang and then use it with the -stdlib=libc++ compile and link flag, or install Clang into the same prefix ($HOME/toolchains above) as GCC. Clang will look within its own prefix for libstdc++ and use it if found. You can also add an explicit prefix for Clang to look in for a GCC toolchain with the --gcc-toolchain=/opt/my/gcc/prefix flag, passing it to both compile and link commands when using your just-built Clang to bootstrap.
Getting Started with LLVM ¶ The remainder of this guide is meant to get you up and running with LLVM and to give you some basic information about the LLVM environment. The later sections of this guide describe the general layout of the LLVM source tree, a simple example using the LLVM toolchain, and links to find more information about LLVM or to get help via e-mail. Terminology and Notation ¶ Throughout this manual, the following names are used to denote paths specific to the local system and working environment. These are not environment variables you need to set, just strings used in the rest of this document. In any of the examples below, simply replace each of these names with the appropriate pathname on your local system. All these paths are absolute: SRC_ROOT This is the top-level directory of the LLVM source tree. OBJ_ROOT This is the top-level directory of the LLVM object tree (i.e. the tree where object files and compiled programs will be placed; it can be the same as SRC_ROOT). Sending patches ¶ See Contributing . Bisecting commits ¶ See Bisecting LLVM code for how to use git bisect on LLVM. Reverting a change ¶ When reverting changes using git, the default message will say “This reverts commit XYZ”. Leave this at the end of the commit message, but add some details before it as to why the commit is being reverted. A brief explanation and/or links to bots that demonstrate the problem are sufficient. Local LLVM Configuration ¶ Once you have checked out the repository, the LLVM suite source code must be configured before being built. This process uses CMake. Unlike the normal configure script, CMake generates the build files in whatever format you request as well as various *.inc files and llvm/include/llvm/Config/config.h.cmake . Variables are passed to cmake on the command line using the format -D<variable name>=<value> . The following variables are some common options used by people developing LLVM.
CMAKE_C_COMPILER CMAKE_CXX_COMPILER CMAKE_BUILD_TYPE CMAKE_INSTALL_PREFIX Python3_EXECUTABLE LLVM_TARGETS_TO_BUILD LLVM_ENABLE_PROJECTS LLVM_ENABLE_RUNTIMES LLVM_ENABLE_DOXYGEN LLVM_ENABLE_SPHINX LLVM_BUILD_LLVM_DYLIB LLVM_LINK_LLVM_DYLIB LLVM_PARALLEL_LINK_JOBS LLVM_OPTIMIZED_TABLEGEN See the list of frequently-used CMake variables for more information. To configure LLVM, follow these steps: Change directory into the object root directory:

% cd OBJ_ROOT

Run cmake:

% cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=<type> -DCMAKE_INSTALL_PREFIX=/install/path [other options] SRC_ROOT

Compiling the LLVM Suite Source Code ¶ Unlike with autotools, with CMake your build type is defined at configuration time. If you want to change your build type, you can re-run CMake with the following invocation:

% cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=<type> SRC_ROOT

Between runs, CMake preserves the values set for all options. CMake has the following build types defined: Debug These builds are the default. The build system will compile the tools and libraries unoptimized, with debugging information, and asserts enabled. Release For these builds, the build system will compile the tools and libraries with optimizations enabled and not generate debug info. CMake's default optimization level is -O3. This can be configured by setting the CMAKE_CXX_FLAGS_RELEASE variable on the CMake command line. RelWithDebInfo These builds are useful when debugging. They generate optimized binaries with debug information. CMake's default optimization level is -O2. This can be configured by setting the CMAKE_CXX_FLAGS_RELWITHDEBINFO variable on the CMake command line. Once you have LLVM configured, you can build it by entering the OBJ_ROOT directory and issuing the following command:

% make

If the build fails, please check here to see if you are using a version of GCC that is known not to compile LLVM.
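As a quick illustration of how the pieces above fit together, the sketch below composes a cmake invocation from a build type and a set of cache variables, each rendered in the -D<variable name>=<value> form the text describes. This is a hypothetical helper of my own, not part of LLVM or CMake, and it only accepts the three build types documented above:

```python
DOCUMENTED_BUILD_TYPES = ("Debug", "Release", "RelWithDebInfo")

def cmake_configure_args(src_root, build_type="Debug",
                         install_prefix=None, **cache_vars):
    """Build an argument list for a CMake configure step.

    Each keyword argument becomes a -D<name>=<value> definition; the
    source root goes last, matching the invocations shown above.
    """
    if build_type not in DOCUMENTED_BUILD_TYPES:
        raise ValueError("unexpected build type: %s" % build_type)
    args = ["cmake", "-G", "Unix Makefiles",
            "-DCMAKE_BUILD_TYPE=%s" % build_type]
    if install_prefix is not None:
        args.append("-DCMAKE_INSTALL_PREFIX=%s" % install_prefix)
    for name in sorted(cache_vars):
        args.append("-D%s=%s" % (name, cache_vars[name]))
    args.append(src_root)
    return args
```

For example, `cmake_configure_args("SRC_ROOT", build_type="Release", LLVM_ENABLE_PROJECTS="clang;lld")` yields the argument list for a Release configure with two extra projects enabled (in a real shell, the semicolon-separated value would need quoting, as the Common Problems section notes).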
If you have multiple processors in your machine, you may wish to use some of the parallel build options provided by GNU Make. For example, you could use the command:

% make -j2

There are several special targets which are useful when working with the LLVM source code: make clean Removes all files generated by the build. This includes object files, generated C/C++ files, libraries, and executables. make install Installs LLVM header files, libraries, tools, and documentation in a hierarchy under $PREFIX , specified with CMAKE_INSTALL_PREFIX , which defaults to /usr/local . make docs-llvm-html If configured with -DLLVM_ENABLE_SPHINX=On , this will generate a directory at OBJ_ROOT/docs/html which contains the HTML formatted documentation. Cross-Compiling LLVM ¶ It is possible to cross-compile LLVM itself. That is, you can create LLVM executables and libraries to be hosted on a platform different from the platform where they are built (a Canadian Cross build). To generate build files for cross-compiling, CMake provides a variable CMAKE_TOOLCHAIN_FILE which can define compiler flags and variables used during the CMake test operations. The result of such a build is executables that are not runnable on the build host but can be executed on the target. As an example, the following CMake invocation can generate build files targeting iOS. This will work on macOS with the latest Xcode:

% cmake -G "Ninja" -DCMAKE_OSX_ARCHITECTURES="armv7;armv7s;arm64" -DCMAKE_TOOLCHAIN_FILE=<PATH_TO_LLVM>/cmake/platforms/iOS.cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_BUILD_RUNTIME=Off -DLLVM_INCLUDE_TESTS=Off -DLLVM_INCLUDE_EXAMPLES=Off -DLLVM_ENABLE_BACKTRACES=Off [options] <PATH_TO_LLVM>

Note: There are some additional flags that need to be passed when building for iOS due to limitations in the iOS SDK. Check How to cross-compile Clang/LLVM using Clang/LLVM and the Clang docs on how to cross-compile in general for more information about cross-compiling.
The Location of LLVM Object Files ¶ The LLVM build system is capable of sharing a single LLVM source tree among several LLVM builds. Hence, it is possible to build LLVM for several different platforms or configurations using the same source tree. Change directory to where the LLVM object files should live:

% cd OBJ_ROOT

Run cmake:

% cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release SRC_ROOT

The LLVM build will create a structure underneath OBJ_ROOT that matches the LLVM source tree. At each level where source files are present in the source tree there will be a corresponding CMakeFiles directory in the OBJ_ROOT . Underneath that directory there is another directory with a name ending in .dir under which you’ll find object files for each source. For example:

% cd llvm_build_dir
% find lib/Support/ -name APFloat*
lib/Support/CMakeFiles/LLVMSupport.dir/APFloat.cpp.o

Optional Configuration Items ¶ If you’re running on a Linux system that supports the binfmt_misc module, and you have root access on the system, you can set your system up to execute LLVM bitcode files directly. To do this, use commands like this (the first command may not be required if you are already using the module):

% mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
% echo ':llvm:M::BC::/path/to/lli:' > /proc/sys/fs/binfmt_misc/register
% chmod u+x hello.bc (if needed)
% ./hello.bc

This allows you to execute LLVM bitcode files directly. On Debian, you can also use this command instead of the ‘echo’ command above:

% sudo update-binfmts --install llvm /path/to/lli --magic 'BC'

Directory Layout ¶ One useful source of information about the LLVM source base is the LLVM doxygen documentation available at https://llvm.org/doxygen/ . The following is a brief introduction to code layout: llvm/cmake ¶ Generates system build files. llvm/cmake/modules Build configuration for llvm user defined options. Checks compiler version and linker flags.
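The register string echoed above has a fixed colon-delimited layout, :name:type:offset:magic:mask:interpreter:flags, with unused fields left empty. A small Python sketch that reconstructs the lli registration line; this is a hypothetical helper of mine, assuming the field layout from the kernel's binfmt_misc documentation:

```python
def binfmt_register_line(name, interpreter, magic, kind="M",
                         offset="", mask="", flags=""):
    """Compose a binfmt_misc register string.

    The layout is :name:type:offset:magic:mask:interpreter:flags.
    Type "M" means the magic bytes are matched against the start of
    the file (LLVM bitcode files begin with the bytes 'BC').
    """
    fields = [name, kind, offset, magic, mask, interpreter, flags]
    for field in fields:
        if ":" in field:
            raise ValueError("fields may not contain the ':' delimiter")
    return ":" + ":".join(fields)
```

For instance, `binfmt_register_line("llvm", "/path/to/lli", "BC")` reproduces the ':llvm:M::BC::/path/to/lli:' string written to /proc/sys/fs/binfmt_misc/register above.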
llvm/cmake/platforms Toolchain configuration for Android NDK, iOS systems and non-Windows hosts to target MSVC. llvm/examples ¶ Some simple examples showing how to use LLVM as a compiler for a custom language - including lowering, optimization, and code generation. Kaleidoscope Tutorial: the Kaleidoscope language tutorial runs through the implementation of a nice little compiler for a non-trivial language, including a hand-written lexer, parser, and AST, as well as code generation support using LLVM, both static (ahead-of-time) compilation and various approaches to Just-In-Time (JIT) compilation. There is also a Kaleidoscope Tutorial for complete beginners . BuildingAJIT: Examples of the BuildingAJIT tutorial that show how LLVM’s ORC JIT APIs interact with other parts of LLVM. They also teach how to recombine the APIs to build a custom JIT that is suited to your use-case. llvm/include ¶ Public header files exported from the LLVM library. The three main subdirectories: llvm/include/llvm All LLVM-specific header files, and subdirectories for different portions of LLVM: Analysis , CodeGen , Target , Transforms , etc. llvm/include/llvm/Support Generic support libraries provided with LLVM but not necessarily specific to LLVM. For example, some C++ STL utilities and a command line option processing library store header files here. llvm/include/llvm/Config Header files configured by cmake . They wrap “standard” UNIX and C header files. Source code can include these header files, which automatically take care of the conditional #includes that cmake generates. llvm/lib ¶ Most source files are here. By putting code in libraries, LLVM makes it easy to share code among the tools . llvm/lib/IR/ Core LLVM source files that implement core classes like Instruction and BasicBlock. llvm/lib/AsmParser/ Source code for the LLVM assembly language parser library. llvm/lib/Bitcode/ Code for reading and writing bitcode.
llvm/lib/Analysis/ A variety of program analyses, such as Call Graphs, Induction Variables, Natural Loop Identification, etc. llvm/lib/Transforms/ IR-to-IR program transformations, such as Aggressive Dead Code Elimination, Sparse Conditional Constant Propagation, Inlining, Loop Invariant Code Motion, Dead Global Elimination, and many others. llvm/lib/Target/ Files describing target architectures for code generation. For example, llvm/lib/Target/X86 holds the X86 machine description. llvm/lib/CodeGen/ The major parts of the code generator: Instruction Selector, Instruction Scheduling, and Register Allocation. llvm/lib/MC/ Libraries that represent and process code at the machine-code level. Handles assembly and object-file emission. llvm/lib/ExecutionEngine/ Libraries for directly executing bitcode at runtime in interpreted and JIT-compiled scenarios. llvm/lib/Support/ Source code that corresponds to the header files in llvm/include/llvm/ADT/ and llvm/include/llvm/Support/ . llvm/bindings ¶ Contains bindings for the LLVM compiler infrastructure to allow programs written in languages other than C or C++ to take advantage of the LLVM infrastructure. The LLVM project provides language bindings for OCaml and Python. llvm/projects ¶ Projects not strictly part of LLVM but shipped with LLVM. This is also the directory for creating your own LLVM-based projects which leverage the LLVM build system. llvm/test ¶ Feature and regression tests and other sanity checks on the LLVM infrastructure. These are intended to run quickly and cover a lot of territory without being exhaustive. test-suite ¶ A comprehensive correctness, performance, and benchmarking test suite for LLVM. This comes in a separate git repository (https://github.com/llvm/llvm-test-suite) because it contains a large amount of third-party code under a variety of licenses. For details see the Testing Guide document. llvm/tools ¶ Executables built out of the libraries above, which form the main part of the user interface.
You can always get help for a tool by typing tool_name -help . The following is a brief introduction to the most important tools. More detailed information is in the Command Guide . bugpoint bugpoint is used to debug optimization passes or code generation backends by narrowing down the given test case to the minimum number of passes and/or instructions that still cause a problem, whether it is a crash or miscompilation. See HowToSubmitABug.html for more information on using bugpoint . llvm-ar The archiver produces an archive containing the given LLVM bitcode files, optionally with an index for faster lookup. llvm-as The assembler transforms the human-readable LLVM assembly to LLVM bitcode. llvm-dis The disassembler transforms the LLVM bitcode to human-readable LLVM assembly. llvm-link llvm-link , not surprisingly, links multiple LLVM modules into a single program. lli lli is the LLVM interpreter, which can directly execute LLVM bitcode (although very slowly…). For architectures that support it (currently x86, Sparc, and PowerPC), by default, lli will function as a Just-In-Time compiler (if the functionality was compiled in), and will execute the code much faster than the interpreter. llc llc is the LLVM backend compiler, which translates LLVM bitcode to a native code assembly file. opt opt reads LLVM bitcode, applies a series of LLVM to LLVM transformations (which are specified on the command line), and outputs the resultant bitcode. ‘ opt -help ’ is a good way to get a list of the program transformations available in LLVM. opt can also run a specific analysis on an input LLVM bitcode file and print the results. Primarily useful for debugging analyses, or familiarizing yourself with what an analysis does. llvm/utils ¶ Utilities for working with LLVM source code; some are part of the build process because they are code generators for parts of the infrastructure. codegen-diff codegen-diff finds differences between code that LLC generates and code that LLI generates. 
This is useful if you are debugging one of them, assuming that the other generates correct output. For the full user manual, run `perldoc codegen-diff' . emacs/ Emacs and XEmacs syntax highlighting for LLVM assembly files and TableGen description files. See the README for information on using them. getsrcs.sh Finds and outputs all non-generated source files, useful if one wishes to do a lot of development across directories and does not want to find each file. One way to use it is to run, for example: xemacs `utils/getsrcs.sh` from the top of the LLVM source tree. llvmgrep Performs an egrep -H -n on each source file in LLVM and passes to it a regular expression provided on llvmgrep ’s command line. This is an efficient way of searching the source base for a particular regular expression. TableGen/ Contains the tool used to generate register descriptions, instruction set descriptions, and even assemblers from common TableGen description files. vim/ vim syntax highlighting for LLVM assembly files and TableGen description files. See the README for how to use them. An Example Using the LLVM Tool Chain ¶ This section gives an example of using LLVM with the Clang front end. Example with clang ¶ First, create a simple C file and name it ‘hello.c’:

#include <stdio.h>

int main() {
  printf("hello world\n");
  return 0;
}

Next, compile the C file into a native executable:

% clang hello.c -o hello

Note Clang works just like GCC by default. The standard -S and -c arguments work as usual (producing a native .s or .o file, respectively). Next, compile the C file into an LLVM bitcode file:

% clang -O3 -emit-llvm hello.c -c -o hello.bc

The -emit-llvm option can be used with the -S or -c options to emit an LLVM .ll or .bc file (respectively) for the code. This allows you to use the standard LLVM tools on the bitcode file. Run the program in both forms. To run the program, use:

% ./hello

and

% lli hello.bc

The second example shows how to invoke the LLVM JIT, lli .
Use the llvm-dis utility to take a look at the LLVM assembly code:

% llvm-dis < hello.bc | less

Compile the program to native assembly using the LLC code generator:

% llc hello.bc -o hello.s

Assemble the native assembly language file into a program:

% /opt/SUNWspro/bin/cc -xarch=v9 hello.s -o hello.native   # On Solaris
% gcc hello.s -o hello.native                              # On others

Execute the native code program:

% ./hello.native

Note that using clang to compile directly to native code (i.e. when the -emit-llvm option is not present) performs the compile-to-assembly, assemble, and link steps above for you. Common Problems ¶ If you are having problems building or using LLVM, or if you have any other general questions about LLVM, please consult the Frequently Asked Questions page. If you are having problems with limited memory and build time, please try building with ninja instead of make . Please consider configuring the following options with CMake: -G Ninja Setting this option will allow you to build with ninja instead of make. Building with ninja significantly improves your build time, especially with incremental builds, and improves your memory usage. -DLLVM_USE_LINKER Setting this option to lld will significantly reduce linking time for LLVM executables, particularly on Linux and Windows. If you are building LLVM for the first time and lld is not available to you as a binary package, then you may want to use the gold linker as a faster alternative to GNU ld. -DCMAKE_BUILD_TYPE Controls the optimization level and debug information of the build. This setting can affect RAM and disk usage; see CMAKE_BUILD_TYPE for more information. -DLLVM_ENABLE_ASSERTIONS This option defaults to ON for Debug builds and to OFF for Release builds. As mentioned in the previous option, using the Release build type and enabling assertions may be a good alternative to using the Debug build type. -DLLVM_PARALLEL_LINK_JOBS Set this equal to the number of jobs you wish to run simultaneously.
This is similar to the -j option used with make , but only for link jobs. This option can only be used with ninja. You may wish to use a very low number of jobs, as this will greatly reduce the amount of memory used during the build process. If you have limited memory, you may wish to set this to 1 . -DLLVM_TARGETS_TO_BUILD Set this equal to the target you wish to build. You may wish to set this to only your host architecture. For example, X86 if you are using an Intel or AMD machine. You will find a full list of targets within the llvm-project/llvm/lib/Target directory. -DLLVM_OPTIMIZED_TABLEGEN Set this to ON to generate a fully optimized TableGen compiler during your build, even if that build is a Debug build. This will significantly improve your build time. You should not enable this if your intention is to debug the TableGen compiler. -DLLVM_ENABLE_PROJECTS Set this equal to the projects you wish to compile (e.g. clang , lld , etc.). If compiling more than one project, separate the items with a semicolon. Should you run into issues with the semicolon, try surrounding it with single quotes. -DLLVM_ENABLE_RUNTIMES Set this equal to the runtimes you wish to compile (e.g. libcxx , libcxxabi , etc.). If compiling more than one runtime, separate the items with a semicolon. Should you run into issues with the semicolon, try surrounding it with single quotes. -DCLANG_ENABLE_STATIC_ANALYZER Set this option to OFF if you do not require the clang static analyzer. This should improve your build time slightly. -DLLVM_USE_SPLIT_DWARF Consider setting this to ON if you require a debug build, as this will ease memory pressure on the linker. This will make linking much faster, as the binaries will not contain any of the debug information. Instead, the debug information is in a separate DWARF object file (with the extension .dwo ). This only applies to host platforms using ELF, such as Linux.
-DBUILD_SHARED_LIBS Setting this to ON will build shared libraries instead of static libraries. This will ease memory pressure on the linker. However, this should only be used when developing llvm. See BUILD_SHARED_LIBS for more information. Links ¶ This document is just an introduction to using LLVM to do some simple things… there are many more interesting and complicated things that you can do that aren’t documented here (but we’ll gladly accept a patch if you want to write something up!). For more information about LLVM, check out: LLVM Homepage LLVM Doxygen Tree Starting a Project that Uses LLVM © Copyright 2003-2026, LLVM Project. Last updated on 2026-01-13.
https://llvmweekly.org/issue/541 | LLVM Weekly - #541, May 13th 2024 Welcome to the five hundred and forty-first issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events As part of trying to catch up on my blogging backlog, I wrote a new post with notes and thoughts from the Carbon panel session at EuroLLVM last month . Raymond Chen wrote a blog post providing an informal comparison of three major implementations of std::string . According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Aaron Ballman, Alexey Bader, Alina Sbirlea, Phoebe Wang, Johannes Doerfert. Online sync-ups on the following topics: Flang, alias analysis, pointer authentication, SPIR-V, libc++, new contributors, LLVM/offload, classic Flang, C/C++ language working group, loop optimisations, floating point, OpenMP, MLIR. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Lucile Rose Nihlen outlined Google’s plan for the LLVM presubmit infrastructure . The 65th edition of MLIR News is now available (I guess I missed commenting on the 2^6 milestone - major credit is due to Javed Absar for carrying the torch on this). Brad Richardson shared an updated revision of the RFC on a parallel runtime interface for Fortran . Fangrui Song posted an RFC on handling some constructs such as .if not currently supported in integrated assemblers .
Ahmed Bougacha proposed adding ptrauth constants , aiming to allow global initialisers to contain signed pointers. There was a lot of further discussion on the idea of a ‘stack’ MLIR dialect . LLVM commits The DXContainer format was documented. afeedd9 . A script was added to generate elaborated IR and assembly tests, along with documentation on how to use it. aacea0d . .option arch in RISC-V assembly now accepts the names of experimental extensions. d70267f . LoongArch gained a ‘W’ instruction MI-level optimisation pass inspired by the equivalent one in RISC-V. e9bcd2b . AArch64 extension information was moved to tablegen. 639a740 . A handy APInt::clearHighBits helper was added. 99934da . Clang commits clang-format learned to handle Java switch expressions. 236b3e1 . The -fseparate-named-sections command-line option was introduced. 8bcb073 . Clang started to emit PAuth ABI compatibility tag values for AArch64 when appropriate. ad652ef . Other project commits libcxx gained a faster std::gcd implementation. 27a062e . MLIR’s bufferization documentation was improved. 099417d . -mlir-print-unique-ssa-ids was added to MLIR’s AsmPrinter. 5717553 . libomptarget now statically links all plugin runtimes. fa9e90f . Subscribe at LLVMWeekly.org .
https://llvmweekly.org/issue/540 | LLVM Weekly - #540, May 6th 2024 Welcome to the five hundred and fortieth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events I didn’t spot any particular articles or blog posts this time. As always, feel free to email me anything relevant. According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Kristof Beyls, Johannes Doerfert. Online sync-ups on the following topics: pointer authentication, AArch64, new contributors, OpenMP, Flang, BOLT, RISC-V, LLVM libc, MLIR. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Xuan Zhang posted part 1 of an RFC on enhancing the MachineOutliner and Kyungwoo Lee posted part 2 . Joshua Cranmer kicked off a detailed RFC thread on improving IR fast-math semantics . Francesco Bertolaccini proposed adding a ‘stack’ dialect to MLIR , intended to model stack-based languages like CIL, JVM, or Ethereum bytecode. Benjamin Maxwell shared a PSA about changes to the ArmSME lowering pipeline in MLIR . Reid Kleckner proposed a modification to the developer policy to give a position on the use of AI tools in contributions . James Y Knight started an RFC discussion on deprecating Clang’s -Ofast argument , which generated far too much discussion for me to usefully summarise. Fangrui Song is looking for an additional reviewer for the utility to generate elaborated assembly/IR tests .
Lawrence Benson sought feedback on introducing a new intrinsic for masked vector compress without store . Joshua Cranmer provided insight on the tradeoffs of using target extension types vs existing types . The hdoc developers reached out with an RFC to improve Clang’s comment parsing to conform better to Doxygen semantics , linking to several PRs already posted to help implement this. Nikita Popov initiated an RFC on adding nusw and nuw flags to getelementptr . LLVM commits LLVM switched from using debug intrinsics internally to using debug records by default. Debug intrinsics will only be supported on a best-effort basis from now on. 91446e2 . llvm-mca learned the -skip-unsupported-instructions option. 5f79f750 . AggressiveInstCombine learned to inline str[n]cmp where one of the strings is small and constant (and in the case of strncmp, the length is constant). 6b94870 . The set of opcodes recognised by canCreateUndefOrPoison was increased. b3c55b7 . LLVM vector interleave2/deinterleave2, reverse, and splice intrinsics are no longer in the experimental namespace. bfc0317 . The RISC-V disassembler can now support instructions up to 176 bits in length. 618adc7 . TLSDESC codegen was added for LoongArch. 09e7d86 . The MachineCombiner was enabled for floating point reassociation of add, sub, and mul on SystemZ. 6c32a1f . A new PatternMatch API was added for matching constants using custom conditions. d8428df , f561daf9 . Clang commits C++17 support in Clang is now viewed as complete, with the enablement of C++17 relaxed template template argument matching being turned on by default acting as the last piece of the puzzle. b86e099 . Scalable vectors are now supported for the __builtin_reduce_* functions. bd07c22 . -fexperimental-late-parse-attributes was added to enable an experimental feature allowing late parsing of certain attributes in specific contexts where they wouldn’t normally be late parsed. b1867e1 .
WebAssembly reference types were disabled for the generic target because it changes the encoding of call_indirect . 8c64a30 . Other project commits The fcntl function was implemented in LLVM’s libc. aca5117 . LLD’s --compress-sections argument now allows a compression level to be specified. 6d44a1e . Additional unary operations were added to MLIR’s linalg dialect. 4cec3b3 . Subscribe at LLVMWeekly.org .
http://anh.cs.luc.edu/handsonPythonTutorial/ifstatements.html#wages-exercise | 3.1. If Statements — Hands-on Python Tutorial for Python 3 3.1.1. Simple Conditions ¶ The statements introduced in this chapter will involve tests or conditions . More syntax for conditions will be introduced later, but for now consider simple arithmetic comparisons that directly translate from math into Python. Try each line separately in the Shell:

2 < 5
3 > 7
x = 11
x > 10
2 * x < x
type(True)

You see that conditions are either True or False . These are the only possible Boolean values (named after 19th century mathematician George Boole). In Python the name Boolean is shortened to the type bool . It is the type of the results of true-false conditions or tests. Note The Boolean values True and False have no quotes around them! Just as '123' is a string and 123 without the quotes is not, 'True' is a string, not of type bool. 3.1.2. Simple if Statements ¶ Run this example program, suitcase.py. Try it at least twice, with inputs: 30 and then 55. As you can see, you get an extra result, depending on the input. The main code is:

weight = float(input("How many pounds does your suitcase weigh? "))
if weight > 50:
    print("There is a $25 charge for luggage that heavy.")
print("Thank you for your business.")

The middle two lines are an if statement. It reads pretty much like English. If it is true that the weight is greater than 50, then print the statement about an extra charge. If it is not true that the weight is greater than 50, then don’t do the indented part: skip printing the extra luggage charge. In any event, when you have finished with the if statement (whether it actually does anything or not), go on to the next statement that is not indented under the if . In this case that is the statement printing “Thank you”.
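The same branching logic can be packaged in a function, which makes it easy to exercise without retyping input each time. A small variant of the suitcase example; the function name is mine, not from the tutorial:

```python
def luggage_charge(weight):
    """Return the extra charge for a suitcase of the given weight in pounds."""
    if weight > 50:
        return 25
    return 0
```

So `luggage_charge(55)` returns 25 and `luggage_charge(30)` returns 0; note that `luggage_charge(50)` is also 0, because the condition is strictly greater than 50.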
The general Python syntax for a simple if statement is

if condition:
    indentedStatementBlock

If the condition is true, then do the indented statements. If the condition is not true, then skip the indented statements. Another fragment as an example:

if balance < 0:
    transfer = -balance
    # transfer enough from the backup account:
    backupAccount = backupAccount - transfer
    balance = balance + transfer

As with other kinds of statements with a heading and an indented block, the block can have more than one statement. The assumption in the example above is that if an account goes negative, it is brought back to 0 by transferring money from a backup account in several steps. In the examples above the choice is between doing something (if the condition is True ) or nothing (if the condition is False ). Often there is a choice of two possibilities, only one of which will be done, depending on the truth of a condition. 3.1.3. if - else Statements ¶ Run the example program, clothes.py . Try it at least twice, with inputs 50 and then 80. As you can see, you get different results, depending on the input. The main code of clothes.py is:

temperature = float(input('What is the temperature? '))
if temperature > 70:
    print('Wear shorts.')
else:
    print('Wear long pants.')
print('Get some exercise outside.')

The middle four lines are an if-else statement. Again it is close to English, though you might say “otherwise” instead of “else” (but else is shorter!). There are two indented blocks: one, like in the simple if statement, comes right after the if heading and is executed when the condition in the if heading is true. In the if - else form this is followed by an else: line, followed by another indented block that is only executed when the original condition is false . In an if - else statement exactly one of the two possible indented blocks is executed. A line about getting exercise is also shown dedented next, removing indentation.
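The if-else choice in clothes.py can likewise be wrapped in a function so both branches are easy to exercise directly; a sketch, with a function name of my own choosing:

```python
def clothing_advice(temperature):
    """Return exactly one of two suggestions, mirroring clothes.py:
    one branch runs when the condition is true, the other when it is false."""
    if temperature > 70:
        return 'Wear shorts.'
    else:
        return 'Wear long pants.'
```

Here `clothing_advice(80)` returns 'Wear shorts.' and `clothing_advice(50)` returns 'Wear long pants.'; exactly one branch runs for any input.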
Since it is dedented, it is not a part of the if-else statement: Since its amount of indentation matches the if heading, it is always executed in the normal forward flow of statements, after the if-else statement (whichever block is selected). The general Python if-else syntax is

if condition:
    indentedStatementBlockForTrueCondition
else:
    indentedStatementBlockForFalseCondition

These statement blocks can have any number of statements, and can include almost any kind of statement. See Graduate Exercise.

3.1.4. More Conditional Expressions

All the usual arithmetic comparisons may be made, but many do not use standard mathematical symbolism, mostly for lack of proper keys on a standard keyboard.

Meaning                  Math Symbol    Python Symbols
Less than                <              <
Greater than             >              >
Less than or equal       ≤              <=
Greater than or equal    ≥              >=
Equals                   =              ==
Not equal                ≠              !=

There should be no space between the two-symbol Python substitutes. Notice that the obvious choice for equals, a single equal sign, is not used to check for equality. An annoying second equal sign is required. This is because the single equal sign is already used for assignment in Python, so it is not available for tests.

Warning: It is a common error to use only one equal sign when you mean to test for equality, and not make an assignment!

Tests for equality do not make an assignment, and they do not require a variable on the left. Any expressions can be tested for equality or inequality (!=). They do not need to be numbers! Predict the results and try each line in the Shell:

x = 5
x
x == 5
x == 6
x
x != 6
x = 6
6 == x
6 != x
'hi' == 'h' + 'i'
'HI' != 'hi'
[1, 2] != [2, 1]

An equality check does not make an assignment. Strings are case sensitive. Order matters in a list. Try in the Shell:

'a' > 5

When the comparison does not make sense, an Exception is caused.
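These claims are easy to verify in a program as well as in the Shell. This short sketch of mine re-runs the equality examples and catches the exception from the nonsense comparison:

```python
x = 5
print(x == 5)               # True: an equality test, not an assignment
print(x == 6)               # False
print('hi' == 'h' + 'i')    # True: any expressions can be compared
print('HI' != 'hi')         # True: strings are case sensitive
print([1, 2] != [2, 1])     # True: order matters in a list

# An ordering comparison between a string and a number raises a TypeError.
try:
    'a' > 5
    orderingWorked = True
except TypeError:
    orderingWorked = False
print(orderingWorked)       # False: the comparison did not make sense
```

Note that after all these tests, x is still 5: testing never changes the variable.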
[1] Following up on the discussion of the inexactness of float arithmetic in String Formats for Float Precision, confirm that Python does not consider .1 + .2 to be equal to .3: Write a simple condition into the Shell to test.

Here is another example: Pay with Overtime. Given a person's work hours for the week and regular hourly wage, calculate the total pay for the week, taking into account overtime. Hours worked over 40 are overtime, paid at 1.5 times the normal rate. This is a natural place for a function enclosing the calculation. Read the setup for the function:

def calcWeeklyWages(totalHours, hourlyWage):
    '''Return the total weekly wages for a worker working totalHours,
    with a given regular hourlyWage. Include overtime for hours over 40.
    '''

The problem clearly indicates two cases: when no more than 40 hours are worked or when more than 40 hours are worked. In case more than 40 hours are worked, it is convenient to introduce a variable overtimeHours. You are encouraged to think about a solution before going on and examining mine. You can try running my complete example program, wages.py, also shown below. The format operation at the end of the main function uses the floating point format (String Formats for Float Precision) to show two decimal places for the cents in the answer:

def calcWeeklyWages(totalHours, hourlyWage):
    '''Return the total weekly wages for a worker working totalHours,
    with a given regular hourlyWage. Include overtime for hours over 40.
    '''
    if totalHours <= 40:
        totalWages = hourlyWage * totalHours
    else:
        overtime = totalHours - 40
        totalWages = hourlyWage * 40 + (1.5 * hourlyWage) * overtime
    return totalWages

def main():
    hours = float(input('Enter hours worked: '))
    wage = float(input('Enter dollars paid per hour: '))
    total = calcWeeklyWages(hours, wage)
    print('Wages for {hours} hours at ${wage:.2f} per hour are ${total:.2f}.'.
format(**locals()))

main()

Here the input was intended to be numeric, but it could be decimal so the conversion from string was via float, not int. Below is an equivalent alternative version of the body of calcWeeklyWages, used in wages1.py. It uses just one general calculation formula and sets the parameters for the formula in the if statement. There are generally a number of ways you might solve the same problem!

if totalHours <= 40:
    regularHours = totalHours
    overtime = 0
else:
    overtime = totalHours - 40
    regularHours = 40
return hourlyWage * regularHours + (1.5 * hourlyWage) * overtime

The in boolean operator: There are also Boolean operators that are applied to types other than numbers. A useful Boolean operator is in, checking membership in a sequence:

>>> vals = ['this', 'is', 'it']
>>> 'is' in vals
True
>>> 'was' in vals
False

It can also be used with not, as not in, to mean the opposite:

>>> vals = ['this', 'is', 'it']
>>> 'is' not in vals
False
>>> 'was' not in vals
True

In general the two versions are:

item in sequence
item not in sequence

Detecting the need for if statements: Like with planning programs needing for statements, you want to be able to translate English descriptions of problems that would naturally include if or if-else statements. What are some words or phrases or ideas that suggest the use of these statements? Think of your own and then compare to a few I gave: [2]

3.1.4.1. Graduate Exercise

Write a program, graduate.py, that prompts students for how many credits they have. Print whether or not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.)

3.1.4.2. Heads or Tails Exercise

Write a program headstails.py. It should include a function flip(), that simulates a single flip of a coin: It randomly prints either Heads or Tails.
Accomplish this by choosing 0 or 1 arbitrarily with random.randrange(2), and use an if-else statement to print Heads when the result is 0, and Tails otherwise. In your main program have a simple repeat loop that calls flip() 10 times to test it, so you generate a random sequence of 10 Heads and Tails.

3.1.4.3. Strange Function Exercise

Save the example program jumpFuncStub.py as jumpFunc.py, and complete the definitions of functions jump and main as described in the function documentation strings in the program. In the jump function definition use an if-else statement (hint [3]). In the main function definition use a for-each loop, the range function, and the jump function. The jump function is introduced for use in Strange Sequence Exercise, and others after that.

3.1.5. Multiple Tests and if-elif Statements

Often you want to distinguish between more than two distinct cases, but conditions only have two possible results, True or False, so the only direct choice is between two options. As anyone who has played "20 Questions" knows, you can distinguish more cases by further questions. If there are more than two choices, a single test may only reduce the possibilities, but further tests can reduce the possibilities further and further. Since almost any kind of statement can be placed in an indented statement block, one choice is a further if statement. For instance consider a function to convert a numerical grade to a letter grade, 'A', 'B', 'C', 'D' or 'F', where the cutoffs for 'A', 'B', 'C', and 'D' are 90, 80, 70, and 60 respectively.
One way to write the function would be to test for one grade at a time, and resolve all the remaining possibilities inside the next else clause:

def letterGrade(score):
    if score >= 90:
        letter = 'A'
    else:   # grade must be B, C, D or F
        if score >= 80:
            letter = 'B'
        else:   # grade must be C, D or F
            if score >= 70:
                letter = 'C'
            else:   # grade must be D or F
                if score >= 60:
                    letter = 'D'
                else:
                    letter = 'F'
    return letter

This repeatedly increasing indentation with an if statement as the else block can be annoying and distracting. A preferred alternative in this situation, that avoids all this indentation, is to combine each else and if block into an elif block:

def letterGrade(score):
    if score >= 90:
        letter = 'A'
    elif score >= 80:
        letter = 'B'
    elif score >= 70:
        letter = 'C'
    elif score >= 60:
        letter = 'D'
    else:
        letter = 'F'
    return letter

The most elaborate syntax for an if-elif-else statement is indicated in general below:

if condition1:
    indentedStatementBlockForTrueCondition1
elif condition2:
    indentedStatementBlockForFirstTrueCondition2
elif condition3:
    indentedStatementBlockForFirstTrueCondition3
elif condition4:
    indentedStatementBlockForFirstTrueCondition4
else:
    indentedStatementBlockForEachConditionFalse

The if, each elif, and the final else lines are all aligned. There can be any number of elif lines, each followed by an indented block. (Three happen to be illustrated above.) With this construction exactly one of the indented blocks is executed. It is the one corresponding to the first True condition, or, if all conditions are False, it is the block after the final else line. Be careful of the strange Python contraction. It is elif, not elseif. A program testing the letterGrade function is in example program grade1.py. See Grade Exercise.

A final alternative for if statements: if-elif-... with no else. This would mean changing the syntax for if-elif-else above so the final else: and the block after it would be omitted.
It is similar to the basic if statement without an else, in that it is possible for no indented block to be executed. This happens if none of the conditions in the tests are true. With an else included, exactly one of the indented blocks is executed. Without an else, at most one of the indented blocks is executed.

if weight > 120:
    print('Sorry, we can not take a suitcase that heavy.')
elif weight > 50:
    print('There is a $25 charge for luggage that heavy.')

This if-elif statement only prints a line if there is a problem with the weight of the suitcase.

3.1.5.1. Sign Exercise

Write a program sign.py to ask the user for a number. Print out which category the number is in: 'positive', 'negative', or 'zero'.

3.1.5.2. Grade Exercise

In Idle, load grade1.py and save it as grade2.py. Modify grade2.py so it has an equivalent version of the letterGrade function that tests in the opposite order, first for F, then D, C, .... Hint: How many tests do you need to do? [4] Be sure to run your new version and test with different inputs that test all the different paths through the program. Be careful to test around cut-off points. What does a grade of 79.6 imply? What about exactly 80?

3.1.5.3. Wages Exercise

* Modify the wages.py or the wages1.py example to create a program wages2.py that assumes people are paid double time for hours over 60. Hence they get paid for at most 20 hours overtime at 1.5 times the normal rate. For example, a person working 65 hours with a regular wage of $10 per hour would work at $10 per hour for 40 hours, at 1.5 * $10 for 20 hours of overtime, and 2 * $10 for 5 hours of double time, for a total of 10*40 + 1.5*10*20 + 2*10*5 = $800. You may find wages1.py easier to adapt than wages.py. Be sure to test all paths through the program! Your program is likely to be a modification of a program where some choices worked before, but once you change things, retest for all the cases! Changes can mess up things that worked before.

3.1.6.
Nesting Control-Flow Statements

The power of a language like Python comes largely from the variety of ways basic statements can be combined. In particular, for and if statements can be nested inside each other's indented blocks. For example, suppose you want to print only the positive numbers from an arbitrary list of numbers in a function with the following heading. Read the pieces for now.

def printAllPositive(numberList):
    '''Print only the positive numbers in numberList.'''

For example, suppose numberList is [3, -5, 2, -1, 0, 7]. You want to process a list, so that suggests a for-each loop,

for num in numberList:

but a for-each loop runs the same code body for each element of the list, and we only want

print(num)

for some of them. That seems like a major obstacle, but look more closely at what needs to happen concretely. As a human, who has eyes of amazing capacity, you are drawn immediately to the actual correct numbers, 3, 2, and 7, but clearly a computer doing this systematically will have to check every number. In fact, there is a consistent action required: Every number must be tested to see if it should be printed. This suggests an if statement, with the condition num > 0. Try loading into Idle and running the example program onlyPositive.py, whose code is shown below. It ends with a line testing the function:

def printAllPositive(numberList):
    '''Print only the positive numbers in numberList.'''
    for num in numberList:
        if num > 0:
            print(num)

printAllPositive([3, -5, 2, -1, 0, 7])

This idea of nesting if statements enormously expands the possibilities with loops. Now different things can be done at different times in loops, as long as there is a consistent test to allow a choice between the alternatives. Shortly, while loops will also be introduced, and you will see if statements nested inside of them, too. The rest of this section deals with graphical examples. Run example program bounce1.py.
It has a red ball moving and bouncing obliquely off the edges. If you watch several times, you should see that it starts from random locations. Also you can repeat the program from the Shell prompt after you have run the script. For instance, right after running the program, try in the Shell

bounceBall(-3, 1)

The parameters give the amount the shape moves in each animation step. You can try other values in the Shell, preferably with magnitudes less than 10. For the remainder of the description of this example, read the extracted text pieces. The animations before this were totally scripted, saying exactly how many moves in which direction, but in this case the direction of motion changes with every bounce. The program has a graphic object shape and the central animation step is

shape.move(dx, dy)

but in this case, dx and dy have to change when the ball gets to a boundary. For instance, imagine the ball getting to the left side as it is moving to the left and up. The bounce obviously alters the horizontal part of the motion, in fact reversing it, but the ball would still continue up. The reversal of the horizontal part of the motion means that the horizontal shift changes direction and therefore its sign:

dx = -dx

but dy does not need to change. This switch does not happen at each animation step, but only when the ball reaches the edge of the window. It happens only some of the time - suggesting an if statement. Still the condition must be determined. Suppose the center of the ball has coordinates (x, y). When x reaches some particular x coordinate, call it xLow, the ball should bounce. The edge of the window is at coordinate 0, but xLow should not be 0, or the ball would be half way off the screen before bouncing! For the edge of the ball to hit the edge of the screen, the x coordinate of the center must be the length of the radius away, so actually xLow is the radius of the ball. Animation goes quickly in small steps, so I cheat.
I allow the ball to take one (small, quick) step past where it really should go (xLow), and then we reverse it so it comes back to where it belongs. In particular

if x < xLow:
    dx = -dx

There are similar bounding variables xHigh, yLow and yHigh, all the radius away from the actual edge coordinates, and similar conditions to test for a bounce off each possible edge. Note that whichever edge is hit, one coordinate, either dx or dy, reverses. One way the collection of tests could be written is

if x < xLow:
    dx = -dx
if x > xHigh:
    dx = -dx
if y < yLow:
    dy = -dy
if y > yHigh:
    dy = -dy

This approach would cause there to be some extra testing: If it is true that x < xLow, then it is impossible for it to be true that x > xHigh, so we do not need both tests together. We avoid unnecessary tests with an elif clause (for both x and y):

if x < xLow:
    dx = -dx
elif x > xHigh:
    dx = -dx
if y < yLow:
    dy = -dy
elif y > yHigh:
    dy = -dy

Note that the middle if is not changed to an elif, because it is possible for the ball to reach a corner, and need both dx and dy reversed. The program also uses several methods to read part of the state of graphics objects that we have not used in examples yet. Various graphics objects, like the circle we are using as the shape, know their center point, and it can be accessed with the getCenter() method. (Actually a clone of the point is returned.) Also each coordinate of a Point can be accessed with the getX() and getY() methods. This explains the new features in the central function defined for bouncing around in a box, bounceInBox. The animation arbitrarily goes on in a simple repeat loop for 600 steps. (A later example will improve this behavior.)

def bounceInBox(shape, dx, dy, xLow, xHigh, yLow, yHigh):
    '''Animate a shape moving in jumps (dx, dy), bouncing when
    its center reaches the low and high x and y coordinates.
    '''
    delay = .005
    for i in range(600):
        shape.move(dx, dy)
        center = shape.
getCenter()
        x = center.getX()
        y = center.getY()
        if x < xLow:
            dx = -dx
        elif x > xHigh:
            dx = -dx
        if y < yLow:
            dy = -dy
        elif y > yHigh:
            dy = -dy
        time.sleep(delay)

The program starts the ball from an arbitrary point inside the allowable rectangular bounds. This is encapsulated in a utility function included in the program, getRandomPoint. The getRandomPoint function uses the randrange function from the module random. Note that in parameters for both the functions range and randrange, the end stated is past the last value actually desired:

def getRandomPoint(xLow, xHigh, yLow, yHigh):
    '''Return a random Point with coordinates in the range specified.'''
    x = random.randrange(xLow, xHigh + 1)
    y = random.randrange(yLow, yHigh + 1)
    return Point(x, y)

The full program is listed below, repeating bounceInBox and getRandomPoint for completeness. Several parts that may be useful later, or are easiest to follow as a unit, are separated out as functions. Make sure you see how it all hangs together or ask questions!

'''Show a ball bouncing off the sides of the window.'''

from graphics import *
import time, random

def bounceInBox(shape, dx, dy, xLow, xHigh, yLow, yHigh):
    '''Animate a shape moving in jumps (dx, dy), bouncing when
    its center reaches the low and high x and y coordinates.
    '''
    delay = .005
    for i in range(600):
        shape.move(dx, dy)
        center = shape.getCenter()
        x = center.getX()
        y = center.getY()
        if x < xLow:
            dx = -dx
        elif x > xHigh:
            dx = -dx
        if y < yLow:
            dy = -dy
        elif y > yHigh:
            dy = -dy
        time.sleep(delay)

def getRandomPoint(xLow, xHigh, yLow, yHigh):
    '''Return a random Point with coordinates in the range specified.'''
    x = random.randrange(xLow, xHigh + 1)
    y = random.randrange(yLow, yHigh + 1)
    return Point(x, y)

def makeDisk(center, radius, win):
    '''Return a red disk that is drawn in win with given center and radius.'''
    disk = Circle(center, radius)
    disk.
setOutline("red")
    disk.setFill("red")
    disk.draw(win)
    return disk

def bounceBall(dx, dy):
    '''Make a ball bounce around the screen, initially moving by (dx, dy) at each jump.'''
    win = GraphWin('Ball Bounce', 290, 290)
    win.yUp()
    radius = 10
    xLow = radius  # center is separated from the wall by the radius at a bounce
    xHigh = win.getWidth() - radius
    yLow = radius
    yHigh = win.getHeight() - radius
    center = getRandomPoint(xLow, xHigh, yLow, yHigh)
    ball = makeDisk(center, radius, win)
    bounceInBox(ball, dx, dy, xLow, xHigh, yLow, yHigh)
    win.close()

bounceBall(3, 5)

3.1.6.1. Short String Exercise

Write a program short.py with a function printShort with heading:

def printShort(strings):
    '''Given a list of strings, print the ones with at most three characters.
    >>> printShort(['a', 'long', 'one'])
    a
    one
    '''

In your main program, test the function, calling it several times with different lists of strings. Hint: Find the length of each string with the len function. The function documentation here models a common approach: illustrating the behavior of the function with a Python Shell interaction. This part begins with a line starting with >>>. Other exercises and examples will also document behavior in the Shell.

3.1.6.2. Even Print Exercise

Write a program even1.py with a function printEven with heading:

def printEven(nums):
    '''Given a list of integers nums, print the even ones.
    >>> printEven([4, 1, 3, 2, 7])
    4
    2
    '''

In your main program, test the function, calling it several times with different lists of integers. Hint: A number is even if its remainder, when dividing by 2, is 0.

3.1.6.3. Even List Exercise

Write a program even2.py with a function chooseEven with heading:

def chooseEven(nums):
    '''Given a list of integers, nums, return a list containing only the even ones.
    >>> chooseEven([4, 1, 3, 2, 7])
    [4, 2]
    '''

In your main program, test the function, calling it several times with different lists of integers and printing the results in the main program. (The documentation string illustrates the function call in the Python shell, where the return value is automatically printed. Remember, that in a program, you only print what you explicitly say to print.) Hint: In the function, create a new list, and append the appropriate numbers to it, before returning the result.

3.1.6.4. Unique List Exercise

* The madlib2.py program has its getKeys function, which first generates a list of each occurrence of a cue in the story format. This gives the cues in order, but likely includes repetitions. The original version of getKeys uses a quick method to remove duplicates, forming a set from the list. There is a disadvantage in the conversion, though: Sets are not ordered, so when you iterate through the resulting set, the order of the cues will likely bear no resemblance to the order they first appeared in the list. That issue motivates this problem: Copy madlib2.py to madlib2a.py, and add a function with this heading:

def uniqueList(aList):
    '''Return a new list that includes the first occurrence of each value
    in aList, and omits later repeats. The returned list should include
    the first occurrences of values in aList in their original order.
    >>> vals = ['cat', 'dog', 'cat', 'bug', 'dog', 'ant', 'dog', 'bug']
    >>> uniqueList(vals)
    ['cat', 'dog', 'bug', 'ant']
    '''

Hint: Process aList in order. Use the in syntax to only append elements to a new list that are not already in the new list. After perfecting the uniqueList function, replace the last line of getKeys, so it uses uniqueList to remove duplicates in keyList. Check that your madlib2a.py prompts you for cue values in the order that the cues first appear in the madlib format string.

3.1.7.
Compound Boolean Expressions

To be eligible to graduate from Loyola University Chicago, you must have 120 credits and a GPA of at least 2.0. This translates directly into Python as a compound condition:

credits >= 120 and GPA >= 2.0

This is true if both credits >= 120 is true and GPA >= 2.0 is true. A short example program using this would be:

credits = float(input('How many units of credit do you have? '))
GPA = float(input('What is your GPA? '))
if credits >= 120 and GPA >= 2.0:
    print('You are eligible to graduate!')
else:
    print('You are not eligible to graduate.')

The new Python syntax is for the operator and:

condition1 and condition2

The compound condition is true if both of the component conditions are true. It is false if at least one of the conditions is false. See Congress Exercise.

In the last example in the previous section, there was an if-elif statement where both tests had the same block to be done if the condition was true:

if x < xLow:
    dx = -dx
elif x > xHigh:
    dx = -dx

There is a simpler way to state this in a sentence: If x < xLow or x > xHigh, switch the sign of dx. That translates directly into Python:

if x < xLow or x > xHigh:
    dx = -dx

The word or makes another compound condition:

condition1 or condition2

is true if at least one of the conditions is true. It is false if both conditions are false. This corresponds to one way the word "or" is used in English. Other times in English "or" is used to mean exactly one alternative is true.

Warning: When translating a problem stated in English using "or", be careful to determine whether the meaning matches Python's or.

It is often convenient to encapsulate complicated tests inside a function. Think how to complete the function starting:

def isInside(point, rect):
    '''Return True if the point is inside the Rectangle rect.'''
    pt1 = rect.getP1()
    pt2 = rect.getP2()

Recall that a Rectangle is specified in its constructor by two diagonally opposite Points.
This example gives the first use in the tutorials of the Rectangle methods that recover those two corner points, getP1 and getP2. The program calls the points obtained this way pt1 and pt2. The x and y coordinates of pt1, pt2, and point can be recovered with the methods of the Point type, getX() and getY(). Suppose that I introduce variables for the x coordinates of pt1, point, and pt2, calling these x-coordinates end1, val, and end2, respectively. On first try you might decide that the needed mathematical relationship to test is

end1 <= val <= end2

Unfortunately, this is not enough: The only requirement for the two corner points is that they be diagonally opposite, not that the coordinates of the second point are higher than the corresponding coordinates of the first point. It could be that end1 is 200; end2 is 100, and val is 120. In this latter case val is between end1 and end2, but substituting into the expression above

200 <= 120 <= 100

is False. The 100 and 200 need to be reversed in this case. This makes a complicated situation. Also this is an issue which must be revisited for both the x and y coordinates. I introduce an auxiliary function isBetween to deal with one coordinate at a time. It starts:

def isBetween(val, end1, end2):
    '''Return True if val is between the ends.
    The ends do not need to be in increasing order.'''

Clearly this is true if the original expression, end1 <= val <= end2, is true. You must also consider the possible case when the order of the ends is reversed: end2 <= val <= end1. How do we combine these two possibilities? The Boolean connectives to consider are and and or. Which applies? You only need one to be true, so or is the proper connective: A correct but redundant function body would be:

if end1 <= val <= end2 or end2 <= val <= end1:
    return True
else:
    return False

Check the meaning: if the compound expression is True, return True.
If the condition is False, return False – in either case return the same value as the test condition. See that a much simpler and neater version is to just return the value of the condition itself!

return end1 <= val <= end2 or end2 <= val <= end1

Note: In general you should not need an if-else statement to choose between true and false values! Operate directly on the boolean expression.

A side comment on expressions like

end1 <= val <= end2

Other than the two-character operators, this is like standard math syntax, chaining comparisons. In Python any number of comparisons can be chained in this way, closely approximating mathematical notation. Though this is good Python, be aware that if you try other high-level languages like Java and C++, such an expression is gibberish. Another way the expression can be expressed (and which translates directly to other languages) is:

end1 <= val and val <= end2

So much for the auxiliary function isBetween. Back to the isInside function. You can use the isBetween function to check the x coordinates,

isBetween(point.getX(), pt1.getX(), pt2.getX())

and to check the y coordinates,

isBetween(point.getY(), pt1.getY(), pt2.getY())

Again the question arises: how do you combine the two tests? In this case we need the point to be both between the sides and between the top and bottom, so the proper connector is and. Think how to finish the isInside method. Hint: [5]

Sometimes you want to test the opposite of a condition. As in English you can use the word not. For instance, to test if a Point was not inside Rectangle rect, you could use the condition

not isInside(point, rect)

In general, not condition is True when condition is False, and False when condition is True.

The example program chooseButton1.py, shown below, is a complete program using the isInside function in a simple application, choosing colors. Pardon the length. Do check it out.
It will be the starting point for a number of improvements that shorten it and make it more powerful in the next section. First a brief overview: The program includes the functions isBetween and isInside that have already been discussed. The program creates a number of colored rectangles to use as buttons and also as picture components. Aside from specific data values, the code to create each rectangle is the same, so the action is encapsulated in a function, makeColoredRect. All of this is fine, and will be preserved in later versions. The present main function is long, though. It has the usual graphics starting code, draws buttons and picture elements, and then has a number of code sections prompting the user to choose a color for a picture element. Each code section has a long if-elif-else test to see which button was clicked, and sets the color of the picture element appropriately.

'''Make a choice of colors via mouse clicks in Rectangles --
A demonstration of Boolean operators and Boolean functions.'''

from graphics import *

def isBetween(x, end1, end2):
    '''Return True if x is between the ends or equal to either.
    The ends do not need to be in increasing order.'''
    return end1 <= x <= end2 or end2 <= x <= end1

def isInside(point, rect):
    '''Return True if the point is inside the Rectangle rect.'''
    pt1 = rect.getP1()
    pt2 = rect.getP2()
    return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
           isBetween(point.getY(), pt1.getY(), pt2.getY())

def makeColoredRect(corner, width, height, color, win):
    '''Return a Rectangle drawn in win with the upper left corner
    and color specified.'''
    corner2 = corner.clone()
    corner2.move(width, -height)
    rect = Rectangle(corner, corner2)
    rect.setFill(color)
    rect.draw(win)
    return rect

def main():
    win = GraphWin('pick Colors', 400, 400)
    win.
yUp()  # right side up coordinates
    redButton = makeColoredRect(Point(310, 350), 80, 30, 'red', win)
    yellowButton = makeColoredRect(Point(310, 310), 80, 30, 'yellow', win)
    blueButton = makeColoredRect(Point(310, 270), 80, 30, 'blue', win)
    house = makeColoredRect(Point(60, 200), 180, 150, 'gray', win)
    door = makeColoredRect(Point(90, 150), 40, 100, 'white', win)
    roof = Polygon(Point(50, 200), Point(250, 200), Point(150, 300))
    roof.setFill('black')
    roof.draw(win)
    msg = Text(Point(win.getWidth() / 2, 375), 'Click to choose a house color.')
    msg.draw(win)
    pt = win.getMouse()
    if isInside(pt, redButton):
        color = 'red'
    elif isInside(pt, yellowButton):
        color = 'yellow'
    elif isInside(pt, blueButton):
        color = 'blue'
    else:
        color = 'white'
    house.setFill(color)
    msg.setText('Click to choose a door color.')
    pt = win.getMouse()
    if isInside(pt, redButton):
        color = 'red'
    elif isInside(pt, yellowButton):
        color = 'yellow'
    elif isInside(pt, blueButton):
        color = 'blue'
    else:
        color = 'white'
    door.setFill(color)
    win.promptClose(msg)

main()

The only further new feature used is in the long return statement in isInside.

return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
       isBetween(point.getY(), pt1.getY(), pt2.getY())

Recall that Python is smart enough to realize that a statement continues to the next line if there is an unmatched pair of parentheses or brackets. Above is another situation with a long statement, but there are no unmatched parentheses on a line. For readability it is best not to make an enormous long line that would run off your screen or paper. Continuing to the next line is recommended. You can make the final character on a line be a backslash ('\') to indicate the statement continues on the next line. This is not particularly neat, but it is a rather rare situation.
Most statements fit neatly on one line, and the creator of Python decided it was best to make the syntax simple in the most common situation. (Many other languages require a special statement terminator symbol like ';' and pay no attention to newlines.) Extra parentheses here would not hurt, so an alternative would be

```python
return (isBetween(point.getX(), pt1.getX(), pt2.getX()) and
        isBetween(point.getY(), pt1.getY(), pt2.getY()))
```

The chooseButton1.py program is long partly because of repeated code. The next section gives another version involving lists.

3.1.7.1. Congress Exercise

A person is eligible to be a US Senator who is at least 30 years old and has been a US citizen for at least 9 years. Write an initial version of a program congress.py to obtain age and length of citizenship from the user and print out whether a person is eligible to be a Senator or not.

A person is eligible to be a US Representative who is at least 25 years old and has been a US citizen for at least 7 years. Elaborate your program congress.py so it obtains age and length of citizenship and prints out just the one of the following three statements that is accurate:

You are eligible for both the House and Senate.
You are eligible only for the House.
You are ineligible for Congress.

3.1.8. More String Methods

Here are a few more string methods useful in the next exercises, assuming the methods are applied to a string s:

s.startswith(pre) returns True if string s starts with string pre: both '-123'.startswith('-') and 'downstairs'.startswith('down') are True, but '1-2-3'.startswith('-') is False.

s.endswith(suffix) returns True if string s ends with string suffix: both 'whoever'.endswith('ever') and 'downstairs'.endswith('airs') are True, but '1-2-3'.endswith('-') is False.

s.replace(sub, replacement, count) returns a new string with up to the first count occurrences of string sub replaced by replacement.
The replacement can be the empty string to delete sub. For example:

```python
s = '-123'
t = s.replace('-', '', 1)       # t equals '123'
t = t.replace('-', '', 1)       # t is still equal to '123'
u = '.2.3.4.'
v = u.replace('.', '', 2)       # v equals '23.4.'
w = u.replace('.', ' dot ', 5)  # w equals ' dot 2 dot 3 dot 4 dot '
```

3.1.8.1. Article Start Exercise

In library alphabetizing, if the initial word is an article ("The", "A", "An"), then it is ignored when ordering entries. Write a program completing this function, and then testing it:

```python
def startsWithArticle(title):
    '''Return True if the first word of title is "The", "A" or "An".'''
```

Be careful: if the title starts with "There", it does not start with an article. What should you be testing for?

3.1.8.2. Is Number String Exercise

** In the later Safe Number Input Exercise, it will be important to know if a string can be converted to the desired type of number. Explore that here. Save example isNumberStringStub.py as isNumberString.py and complete it. It contains headings and documentation strings for the functions in both parts of this exercise.

A legal whole number string consists entirely of digits. Luckily strings have an isdigit method, which is true when a nonempty string consists entirely of digits, so '2397'.isdigit() returns True, and '23a'.isdigit() returns False, exactly corresponding to the situations when the string represents a whole number! In both parts be sure to test carefully. Not only confirm that all appropriate strings return True; also be sure to test that you return False for all sorts of bad strings.

Recognizing an integer string is more involved, since it can start with a minus sign (or not). Hence the isdigit method is not enough by itself. This part is the most straightforward if you have worked on the sections String Indices and String Slices. An alternate approach works if you use the count method from Object Orientation, and some methods from this section.
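As a hint at one possible approach (a sketch, not the tutorial's official solution), the minus-sign case can be handled with just the startswith, replace, and isdigit methods discussed above. The function name isIntStr matches the stub described in the exercise:

```python
def isIntStr(s):
    '''Return True if s represents an integer,
    optionally preceded by a single minus sign.'''
    if s.startswith('-'):
        s = s.replace('-', '', 1)  # strip one leading minus sign
    return s.isdigit()             # True only for a nonempty run of digits

print(isIntStr('-123'))  # True
print(isIntStr('23a'))   # False
print(isIntStr('-'))     # False: nothing after the minus sign
print(isIntStr('1-2'))   # False: minus sign not at the start
```

Note that ''.isdigit() is False, which is why a lone '-' is correctly rejected after the minus sign is stripped.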
Complete the function isIntStr.

Complete the function isDecimalStr, which introduces the possibility of a decimal point (though a decimal point is not required). The string methods mentioned in the previous part remain useful.

[1] This is an improvement that is new in Python 3.
[2] "In this case do ___; otherwise", "if ___, then", "when ___ is true, then", "___ depends on whether",
[3] If you divide an even number by 2, what is the remainder? Use this idea in your if condition.
[4] 4 tests to distinguish the 5 cases, as in the previous version.
[5] Once again, you are calculating and returning a Boolean result. You do not need an if-else statement.

© Copyright 2019, Dr. Andrew N. Harrington. Last updated on Jan 05, 2020. Created using Sphinx 1.3.1+. | 2026-01-13T09:30:34 |
https://www.php.net/manual/uk/function.session-name.php | PHP: session_name - Manual
session_name

(PHP 4, PHP 5, PHP 7, PHP 8)

session_name — Get and/or set the current session name

Description

session_name(?string $name = null): string|false

session_name() returns the name of the current session. If name is given, session_name() will update the session name and return the old session name.

If a new session name is supplied, session_name() modifies the HTTP cookie (and outputs the content when session.use_trans_sid is enabled). Once the HTTP cookie has been sent, calling session_name() raises an E_WARNING.

session_name() must be called before session_start() for the session to work properly.

The session name is reset to the default value stored in session.name at request startup time. Thus, you need to call session_name() for every request (and before session_start() is called).

Parameters

name — The session name references the name of the session, which is used in cookies and URLs (e.g. PHPSESSID). It should contain only alphanumeric characters; it should be short and descriptive (i.e. for users with enabled cookie warnings). If name is specified and not null, the name of the current session is changed to its value.

Warning: The session name can't consist of digits only; at least one letter must be present. Otherwise a new session id is generated every time.

Return Values

Returns the name of the current session. If name is given and the function updates the session name, the name of the old session is returned, or false on failure.

Changelog

Version | Description
8.0.0   | name is nullable now.
7.2.0   | session_name() checks the session status; previously it only checked the cookie status. Therefore, the older session_name() allowed calling session_name() after session_start(), which could crash PHP and result in misbehavior.

Examples

Example #1 session_name() example

```php
<?php
/* set the session name to WebsiteID */
$previous_name = session_name("WebsiteID");
echo "The previous session name was $previous_name <br />";
?>
```

See Also

The session.name configuration directive

User Contributed Notes (9 notes)

Hongliang Qiang, 21 years ago:
This may sound no-brainer: the session_name() function will have no essential effect if you set session.auto_start to "true" in php.ini. The obvious explanation is that the session already started and thus cannot be altered before the session_name() function (wherever it is in the script) is executed, for the same reason session_name needs to be called before session_start() as documented. I know it is really not a big deal, but I had a quite hard time before figuring this out, and hope it might be helpful to someone like me.

php at wiz dot cx, 17 years ago:
If you try to name a PHP session "example.com" it gets converted to "example_com" and everything breaks. Don't use a period in your session name.

relsqui at chiliahedron dot com, 16 years ago:
Remember, kids: you MUST use session_name() first if you want to use session_set_cookie_params() to, say, change the session timeout. Otherwise it won't work, won't give any error, and nothing in the documentation (that I've seen, anyway) will explain why. Thanks to brandan of bildungsroman.com, who left a note under session_set_cookie_params() explaining this, or I'd probably still be throwing my hands up about it.

Joseph Dalrymple, 14 years ago:
For those wondering, this function is expensive!
On a script that was executing in a consistent 0.0025 seconds, just the use of session_name("foo") shot my execution time up to ~0.09s. By simply sacrificing session_name("foo"), I sped my script up by roughly 0.09 seconds.

Victor H, 10 years ago:
As Joseph Dalrymple said, adding session_name does slow down execution time a little. But what I've observed is that it decreased the fluctuation between requests. Requests on my script fluctuated between 0.045 and 0.022 seconds. With session_name("myapp"), it goes to 0.050 and 0.045. Not a big deal, but that's a point to note. For those with problems setting the name: when session.auto_start is set to 1, you need to set session.name in php.ini!

mmulej at gmail dot com, 4 years ago:
Hope this is not out of php.net's noting scope. session_name('name') must be set before session_start() because the former changes ini settings and the latter reads them. For the same reason, session_set_cookie_params($options) must be set before session_start() as well. I find it best to do the following:

```php
function is_session_started()
{
    if (php_sapi_name() === 'cli') return false;
    if (version_compare(phpversion(), '5.4.0', '>=')) {
        return session_status() === PHP_SESSION_ACTIVE;
    }
    return session_id() !== '';
}

if (!is_session_started()) {
    session_name($session_name);
    session_set_cookie_params($cookie_options);
    session_start();
}
```

tony at marston-home dot demon dot co dot uk, 7 years ago:
The description that session_name() gets and/or sets the name of the current session is technically wrong. It does nothing but deal with the value originally supplied by the session.name value within the php.ini file.
Thus:

```php
$name = session_name();
// is functionally equivalent to
$name = ini_get('session.name');

session_name('newname');
// is functionally equivalent to
ini_set('session.name', 'newname');

// This also means that:
$old_name = session_name('newname');
// is functionally equivalent to
$old_name = ini_set('session.name', 'newname');
```

The current value of session.name is not attached to a session until session_start() is called. Once session_start() has used session.name to look up the session_id() in the cookie data, the name becomes irrelevant, as all further operations on the session data are keyed by the session_id(). Note that changing session.name while a session is currently active will not update the name in any session cookie. The new name does not take effect until the next call to session_start(), and this requires that the current session, which was created with the previous value for session.name, be closed.

tony at marston-home dot demon dot co dot uk, 7 years ago:
The description has recently been modified to contain the statement "When new session name is supplied, session_name() modifies HTTP cookie". This is not correct, as session_name() has never modified any cookie data. A change in session.name does not become effective until session_start() is called, and it is session_start() that creates the cookie if it does not already exist. See the following bug report for details: https://bugs.php.net/bug.php?id=76413

descartavel1+php at gmail dot com, 2 years ago:
Always try to set the prefix for your session name attribute to either `__Host-` or `__Secure-` to benefit from browsers' improved security.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes

Also, if you have auto_session enabled, you must set this name in session.name in your config (php.ini, htaccess, etc).

Copyright © 2001-2026 The PHP Documentation Group | 2026-01-13T09:30:34 |
https://support.microsoft.com/pl-pl/windows/zarz%C4%85dzanie-plikami-cookie-w-przegl%C4%85darce-microsoft-edge-wy%C5%9Bwietlanie-zezwalanie-blokowanie-usuwanie-i-u%C5%BCywanie-168dab11-0753-043d-7c16-ede5947fc64d | Manage cookies in Microsoft Edge: view, allow, block, delete, and use - Microsoft Support
Manage cookies in Microsoft Edge: view, allow, block, delete, and use

Applies to: Windows 10, Windows 11, Microsoft Edge

Cookies are small pieces of data that the websites you visit store on your device. They serve various purposes, such as remembering login credentials and site preferences, and tracking user behavior. However, you may want to delete cookies for privacy reasons or to resolve browsing problems.
This article provides instructions for the following tasks:

View all cookies
Allow all cookies
Allow cookies from a specific website
Block third-party cookies
Block all cookies
Block cookies from a specific site
Delete all cookies
Delete cookies from a specific site
Delete cookies every time you close the browser
Use cookies to preload pages for faster browsing

View all cookies

1. Open Edge and select Settings and more in the upper-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies, then click See all cookies and site data to view all stored cookies and the associated site information.

Allow all cookies

Allowing cookies lets websites save and retrieve data in your browser, which can improve browsing by remembering preferences and login information.

1. Open Edge and select Settings and more in the upper-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and turn on the Allow sites to save and read cookie data (recommended) toggle to allow all cookies.

Allow cookies from a specific site

Allowing cookies lets websites save and retrieve data in your browser, which can improve browsing by remembering preferences and login information.

1. Open Edge and select Settings and more in the upper-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and go to Allowed to save cookies.
4. Select Add site to allow cookies for that site by entering the site's URL.

Block third-party cookies

If you don't want third-party sites to store cookies on your computer, you can block them. However, this may prevent some pages from displaying correctly, or a site may tell you that you need to allow cookies to view it.

1. Open Edge and select Settings and more in the upper-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and turn on the Block third-party cookies toggle.

Block all cookies

If you don't want sites to store cookies on your computer, you can block all cookies. However, this may prevent some pages from displaying correctly, or a site may tell you that you need to allow cookies to view it.

1. Open Edge and select Settings and more in the upper-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and turn off Allow sites to save and read cookie data (recommended) to block all cookies.

Block cookies from a specific site

Microsoft Edge lets you block cookies from a specific site; however, this may prevent some pages from displaying correctly, or a site may tell you that you need to allow cookies to view it. To block cookies from a specific site:

1. Open Edge and select Settings and more in the upper-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and go to Not allowed to save and read cookies.
4. Select Add site to block cookies for that site by entering its URL.

Delete all cookies

1. Open Edge and select Settings and more in the upper-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Clear browsing data, then select Choose what to clear next to Clear browsing data now.
4. Under Time range, choose a time range from the list.
5. Select Cookies and other site data, then select Clear now.

Note: You can also delete cookies by pressing CTRL + SHIFT + DELETE and then following steps 4 and 5. All cookies and other site data will be deleted for the selected time range. This will sign you out of most sites.

Delete cookies from a specific site

1. Open Edge and select Settings and more > Settings > Privacy, search, and services.
2. Select Cookies, then click See all cookies and site data and search for the site whose cookies you want to delete.
3. Select the down arrow to the right of the site whose cookies you want to delete, then select Delete. The cookies for the selected site are now deleted. Repeat this step for each site whose cookies you want to remove.

Delete cookies every time you close the browser

1. Open Edge and select Settings and more > Settings > Privacy, search, and services.
2. Select Clear browsing data, then select Choose what to clear every time you close the browser.
3. Turn on the Cookies and other site data toggle.
Once this feature is enabled, every time you close Edge all cookies and other site data will be deleted. This will sign you out of most sites.

Use cookies to preload pages for faster browsing

1. Open Edge and select Settings and more in the upper-right corner of the browser window.
2. Select Settings > Privacy, search, and services.
3. Select Cookies and turn on Preload pages for faster browsing and searching.
© Microsoft 2026 | 2026-01-13T09:30:34 |
https://ja-jp.facebook.com/login/?next=https%3A%2F%2Fl.facebook.com%2Fl.php%3Fu%3Dhttps%253A%252F%252Fwww.instagram.com%252F%26amp%253Bh%3DAT2keGOX2gxggubPvbD2U4v7iuDC9S3X2Lk7QEf_zG6UCPFlqRgkB8gDW-EZJfw1fiHskSqvBHeZQITabHORPi6bmhZQB9HgmigfedLFjaYt-gvk5RTqrih_kcyIODnVVuMY0a6Vs1-agSJ_ | Facebook login page. Email address or phone number / Password / Forgotten account? / Create new account. Feature temporarily paused: You have been temporarily blocked from using this feature because you were using it too quickly. Meta © 2026 | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/542 | LLVM Weekly - #542, May 20th 2024

Welcome to the five hundred and forty-second issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org.

News and articles from around the web and events

The next Cambridge compiler social will take place on June 19th at the Computer Laboratory. Be sure to RSVP.

LLVM 18.1.6 was released. This is the last planned release for 18.1.x.

The call for papers for the tenth workshop on the LLVM compiler infrastructure in HPC is out.

According to the LLVM calendar, in the coming week there will be the following:
Office hours with the following hosts: Kristof Beyls, Johannes Doerfert.
Online sync-ups on the following topics: SPIR-V, GlobalISel, security group, new contributors, pointer authentication, OpenMP, Flang, RISC-V, MLIR, embedded toolchains.
For more details see the LLVM calendar and the getting involved documentation on online sync-ups and office hours.

On the forums

Vasileios Porpodas kicked off an RFC discussion on introducing the "sandbox vectorizer", an experimental modular vectorizer. This involves a transactional IR allowing the reversion of IR state if a transformation is found not to be profitable.

Catherine "whitequark" posted an RFC on adding support for building Clang and LLVM for WebAssembly, linking to patches implementing this.

Matthias Springer proposed a new dialect conversion driver for MLIR, detailing in depth the issues with the current dialect conversion approach (slow, too complicated, partly undocumented restrictions, and it's difficult to use correctly).
John Regehr shared the good news that the AArch64 translation validation fuzzer is struggling to find new miscompilation bugs . "linuxlonelyeagle" suggested adding a memref.null operation to MLIR's memref dialect . Simeon is seeking feedback on adding support for recursive inlining to LLVM . Sergio Afonso started an MLIR RFC discussion on adding a clause-based representation of OpenMP dialect operations . Théo Degioanni proposed allowing symbol references to escape from SymbolTable in MLIR. "Menooker" suggested adding a new MLIR dialect containing utility functions like printf . It received some pushback, with concerns that it might not be tightly scoped enough. LLVM commits An llvm.experimental.histogram intrinsic was added. fbb37e9 . The algorithm used to match "stale" profiles was improved. 23f8fac7 . RISCVInsertVSETVLI was moved to after phi elimination. 1a58e88 . The current status of VPlan for the loop vectorizer was documented. 99de3a6 . The LoopMicroOpBufferSize setting for the Zen 3 and 4 scheduling models was reduced in order to limit the amount of partial loop unrolling performed. 54e52aa . The pip requirements.txt in the LLVM tree now includes git hashes. 89b83d2 . update_test_checks now matches basic block labels using FileCheck expressions. 597ac47 . A pair of changes to the TableGen SubtargetEmitter result in a substantial improvement in tblgen time for the RISC-V backend (reportedly from 29 down to 9 seconds). c675a58 . 67beebf . Clang commits Non-constant tile sizes are now allowed in OpenMP. b0b6c16 . Work to break up the giant 'Sema' file continued, with SemaObjC and SemaCodeCompletion split out. 31a203f , 874f511 . RISC-V profiles that are not yet ratified are now gated behind the -menable-experimental-extensions flag. e5a277b , 891d687 . New AArch64 intrinsics were added for bfloat16 min/max/minnm/maxnm. f7392f4 . As part of the bounds safety work, the counted_by attribute can now be used on pointers in structs in C. 0ec3b97 .
The modernize-use-std-format check was added, converting absl::StrFormat and similar functions to std::format . af79372d . Other project commits Flang gained initial debuginfo support for local variables. cd5ee27 . --enable-non-contiguous-regions is now implemented for LLD. 6731144 . More debug server packets were documented in LLDB’s docs. b6f050f . Liveness information is now used in the tile allocator for ArmSME in MLIR. 041baf2 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://releases.llvm.org/18.1.8/tools/lld/docs/index.html | LLD - The LLVM Linker — lld 18.1.8 documentation Bugs: to report bugs, please visit PE/COFF, ELF, Mach-O, or WebAssembly. LLD - The LLVM Linker ¶ LLD is a linker from the LLVM project that is a drop-in replacement for system linkers and runs much faster than them. It also provides features that are useful for toolchain developers. The linker supports ELF (Unix), PE/COFF (Windows), Mach-O (macOS) and WebAssembly in descending order of completeness. Internally, LLD consists of several different linkers. The ELF port is the one that will be described in this document. The PE/COFF port is complete, including Windows debug info (PDB) support. The WebAssembly port is still a work in progress (See WebAssembly lld port ). Features ¶ LLD is a drop-in replacement for the GNU linkers that accepts the same command line arguments and linker scripts as GNU. LLD is very fast. When you link a large program on a multicore machine, you can expect that LLD runs more than twice as fast as the GNU gold linker. Your mileage may vary, though. It supports various CPUs/ABIs including AArch64, AMDGPU, ARM, Hexagon, LoongArch, MIPS 32/64 big/little-endian, PowerPC, PowerPC64, RISC-V, SPARC V9, x86-32 and x86-64. Among these, AArch64, ARM (>= v4), LoongArch, PowerPC, PowerPC64, RISC-V, x86-32 and x86-64 have production quality. MIPS seems decent too. It is always a cross-linker, meaning that it always supports all the above targets however it was built. In fact, we don't provide a build-time option to enable/disable each target. This should make it easy to use our linker as part of a cross-compile toolchain. You can embed LLD in your program to eliminate dependencies on external linkers.
All you have to do is to construct object files and command line arguments just like you would do to invoke an external linker and then call the linker's main function, lld::lldMain , from your code. It is small. We are using the LLVM libObject library to read from object files, so it is not a completely fair comparison, but as of February 2017, LLD/ELF consists of only 21k lines of C++ code while GNU gold consists of 198k lines of C++ code. Link-time optimization (LTO) is supported by default. Essentially, all you have to do to enable LTO is to pass the -flto option to clang. Clang then creates object files not in the native object file format but in LLVM bitcode format. LLD reads bitcode object files, compiles them using LLVM, and emits an output file. Because LLD can see the entire program this way, it can do whole-program optimization. Some very old features for ancient Unix systems (pre-90s or even before that) have been removed. Some default settings have been tuned for the 21st century. For example, the stack is marked as non-executable by default to tighten security. Performance ¶ This is a link time comparison on a 2-socket 20-core 40-thread Xeon E5-2680 2.80 GHz machine with an SSD drive. We ran gold and lld with and without multi-threading support. To disable multi-threading, we added -no-threads to the command lines.

Program        Output size   GNU ld       gold (1 thread)   gold (threads)   lld (1 thread)   lld (threads)
ffmpeg dbg     92 MiB        1.72s        1.16s             1.01s            0.60s            0.35s
mysqld dbg     154 MiB       8.50s        2.96s             2.68s            1.06s            0.68s
clang dbg      1.67 GiB      104.03s      34.18s            23.49s           14.82s           5.28s
chromium dbg   1.14 GiB      209.05s [1]  64.70s            60.82s           27.60s           16.70s

As you can see, lld is significantly faster than the GNU linkers. Note that this is just a benchmark result of our environment. Depending on the number of available cores, the available amount of memory, or disk latency/throughput, your results may vary.
[1] Since GNU ld doesn't support the -icf=all and -gdb-index options, we removed them from the command line for GNU ld. GNU ld would have been slower than this if it had these options. Build ¶ If you have already checked out LLVM using SVN, you can check out LLD under the tools directory just like you probably did for clang. For the details, see Getting Started with the LLVM System . If you haven't checked out LLVM, the easiest way to build LLD is to check out the entire LLVM projects/sub-projects from a git mirror and build that tree. You need cmake and of course a C++ compiler.

$ git clone https://github.com/llvm/llvm-project llvm-project
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_PROJECTS=lld -DCMAKE_INSTALL_PREFIX=/usr/local ../llvm-project/llvm
$ make install

Using LLD ¶ LLD is installed as ld.lld . On Unix, linkers are invoked by compiler drivers, so you are not expected to use that command directly. There are a few ways to tell compiler drivers to use ld.lld instead of the default linker. The easiest way to do that is to overwrite the default linker. After installing LLD somewhere on your disk, you can create a symbolic link with ln -s /path/to/ld.lld /usr/bin/ld so that /usr/bin/ld resolves to LLD. If you don't want to change the system setting, you can use clang's -fuse-ld option. In this case, add -fuse-ld=lld to LDFLAGS when building your programs. LLD leaves its name and version number in a .comment section in the output. If you are in doubt whether you are successfully using LLD or not, run readelf --string-dump .comment <output-file> and examine the output. If the string "Linker: LLD" is included in the output, you are using LLD. History ¶ Here is a brief project history of the ELF and COFF ports. May 2015: We decided to rewrite the COFF linker and did that. Noticed that the new linker is much faster than the MSVC linker.
July 2015: The new ELF port was developed based on the COFF linker architecture. September 2015: The first patches to support MIPS and AArch64 landed. October 2015: Succeeded in self-hosting the ELF port. We noticed that the linker was faster than the GNU linkers, but we weren't sure at the time whether we would be able to keep the gap as we added more features to the linker. July 2016: Started working on improving the linker script support. December 2016: Succeeded in building the entire FreeBSD base system, including the kernel. We had widened the performance gap against the GNU linkers. Internals ¶ For the internals of the linker, please read The ELF, COFF and Wasm Linkers . It is a bit outdated but the fundamental concepts remain valid. We'll update the document soon. © Copyright 2011-2024, LLVM Project. Last updated on 2024-06-19. Created using Sphinx 7.1.2. | 2026-01-13T09:30:34
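The "Using LLD" section above boils down to two commands; a minimal sketch (assuming clang, LLD, and GNU readelf are installed, and using hello.c as a placeholder source file):

```shell
# Link with LLD via clang's driver flag, without replacing /usr/bin/ld.
clang -fuse-ld=lld -o hello hello.c

# LLD records its name and version in the output's .comment section;
# look for a "Linker: LLD" string to confirm which linker ran.
readelf --string-dump .comment hello
```

The same flag can be applied project-wide by appending -fuse-ld=lld to LDFLAGS in the build system, as the document notes.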
https://releases.llvm.org/18.1.8/tools/lld/docs/ReleaseNotes.html | lld 18.1.8 Release Notes — lld 18.1.8 documentation Introduction ¶ This document contains the release notes for the lld linker, release 18.1.8. Here we describe the status of lld, including major improvements from the previous release. All lld releases may be downloaded from the LLVM releases web site . Non-comprehensive list of changes in this release ¶ ELF Improvements ¶ --fat-lto-objects option is added to support LLVM FatLTO. Without --fat-lto-objects , LLD will link LLVM FatLTO objects using the relocatable object file. ( D146778 ) -Bsymbolic-non-weak is added to directly bind non-weak definitions. ( D158322 ) --lto-validate-all-vtables-have-type-infos , which complements --lto-whole-program-visibility , is added to disable unsafe whole-program devirtualization. --lto-known-safe-vtables=<glob> can be used to mark known-safe vtable symbols. ( D155659 ) --save-temps --lto-emit-asm now derives ELF/asm file names from bitcode file names. ld.lld --save-temps a.o d/b.o -o out will create ELF relocatable files out.lto.a.o / d/out.lto.b.o instead of out1.lto.o / out2.lto.o . ( #78835 ) --no-allow-shlib-undefined now reports errors for DSO referencing non-exported definitions. ( #70769 ) common-page-size can now be larger than the system page-size.
( #57618 ) When call graph profile information is available due to instrumentation or sample PGO, input sections are now sorted using the new cdsort algorithm, better than the previous hfsort algorithm. ( D152840 ) Symbol assignments like a = DEFINED(a) ? a : 0; are now handled. ( #65866 ) OVERLAY now supports optional start address and LMA ( #77272 ) Relocations referencing a symbol defined in /DISCARD/ section now lead to an error. ( #69295 ) For AArch64 MTE, global variable descriptors have been implemented. ( D152921 ) R_AARCH64_GOTPCREL32 is now supported. ( #72584 ) R_LARCH_PCREL20_S2 / R_LARCH_ADD6 / R_LARCH_CALL36 and extreme code model relocations are now supported. --emit-relocs is now supported for RISC-V linker relaxation. ( D159082 ) Call relaxation respects RVC when mixing +c and -c relocatable files. ( #73977 ) R_RISCV_GOT32_PCREL is now supported. ( #72587 ) R_RISCV_SET_ULEB128 / R_RISCV_SUB_ULEB128 relocations are now supported. ( #72610 ) ( #77261 ) RISC-V TLSDESC is now supported. ( #79239 ) Breaking changes ¶ COFF Improvements ¶ Added support for --time-trace and associated --time-trace-granularity . This generates a .json profile trace of the linker execution. ( #68236 ) The -dependentloadflag option was implemented. ( #71537 ) LLD now prefers library paths specified with -libpath: over the implicitly detected toolchain paths. ( #78039 ) Added new options -lldemit:llvm and -lldemit:asm for getting the output of LTO compilation as LLVM bitcode or assembly. ( #66964 ) ( #67079 ) Added a new option -build-id for generating a .buildid section when not generating a PDB. A new symbol __buildid is generated by the linker, allowing code to reference the build ID of the binary. ( #71433 ) ( #74652 ) A new, LLD specific option, -lld-allow-duplicate-weak , was added for allowing duplicate weak symbols. ( #68077 ) More correctly handle LTO of files that define __imp_ prefixed dllimport redirections. 
( #70777 ) ( #71376 ) ( #72989 ) Linking undefined references to weak symbols with LTO now works. ( #70430 ) Use the SOURCE_DATE_EPOCH environment variable for the PE header and debug directory timestamps, if neither the /Brepro nor /timestamp: options have been specified. This makes the linker output reproducible by setting this environment variable. ( #81326 ) Lots of incremental work towards supporting linking ARM64EC binaries. MinGW Improvements ¶ Added support for many LTO and ThinLTO options (most LTO options supported by the ELF driver, that are implemented by the COFF backend as well, should be supported now). ( D158412 ) ( D158887 ) ( #77387 ) ( #81475 ) LLD no longer tries to autodetect and use library paths from MSVC/WinSDK installations when run in MinGW mode; that mode of operation shouldn't ever be needed in MinGW mode, and could be a source of unexpected behaviours. ( D144084 ) The --icf=safe option now works as expected; it was previously a no-op. ( #70037 ) The strip flags -S and -s now can be used to strip out DWARF debug info and symbol tables while emitting a PDB debug info file. ( #75181 ) The option --dll is handled as an alias for the --shared option. ( #68575 ) The option --sort-common is ignored now. ( #66336 ) MachO Improvements ¶ WebAssembly Improvements ¶ Indexes are no longer required on archive files. Instead symbol information is read from object files within the archive. This matches the behaviour of the ELF linker. SystemZ ¶ Add target support for SystemZ (s390x). Fixes ¶ | 2026-01-13T09:30:34
https://llvmweekly.org/issue/549 | LLVM Weekly - #549, July 8th 2024 Welcome to the five hundred and forty-ninth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events A couple of papers were shared this week that may be of interest to readers, Verifying Peephole Rewriting In SSA Compiler IRs and Refined Input, Degraded Output: The Counterintuitive World of Compiler Behavior . The 2024-06 C++ committee trip report from /r/cpp contributors is now available . According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Aaron Ballman, Alexey Bader, Anton Korobeynikov, Alina Sbirlea, Johannes Doerfert. Online sync-ups on the following topics: Flang, alias analysis, pointer authentication, libc++, new contributors, LLVM/Offload, loop optimisations, BOLT, OpenMP in Flang, MLIR. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Tanya Lattner is seeking new nominations for the LLVM CoC committee members . Tom Stellard provided an update on the new criteria for commit access , noting that the number of users with commit access was reduced from 1798 to 1471. Tobias Stadler made a GlobalISel RFC on allowing arbitrary instruction erasure in InstructionSelect . Brandon Wu proposed supporting RISC-V vector tuple types in LLVM . Aiden Grossman shared an RFC on testing strategy for X86 CPU/feature detection . Rafael Ubal proposed improvements to the MLIR 'quant' dialect .
LLVM commits Documentation was added on RISC-V vector codegen. e860c16 . Initial code for the sandbox vectoriser started to land. f9efc29 , d5f5dc9 . SimplifyCFG learned how to simplify nested branches. 4997af9 . InstCombine can now simplify select using KnownBits of the condition. 77eb056 . Support was added to the MC layer, llvm-readobj, and yaml2obj for the CREL relocation format. 1b704e8 . 128-bit operands can now be used in inline asm with the NVPTX backend. cbd3f25 . The “generic” and “bleeding-edge” WebAssembly CPU definitions were updated. fb6e024 . LoopIdiomVectorize now supports using VP intrinsics to replace byte compare loops. 8b55d34 . An llvm.experimental.partial.reduce.add* intrinsic was added. 6222c8f . Clang commits clang-repl now supports executing C++ code interactively inside a JS engine using WebAssembly when built with Emscripten. 9a9546e . Clang static analyzer docs were migrated from HTML to RST. 3cab132 , 093aaca . A new bugprone-pointer-arithmetic-on-polymorphic-object check was introduced to clang-tidy. f329e3e . Other project commits A placeholder dlfcn.h header was added to LLVM’s libc. b151c7e . MLIR OpenMP_Op definitions were reworked to be based on OpenMP_Clause which allows for more aspects to be tablegen'ed. d1fcfce . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/544 | LLVM Weekly - #544, June 3rd 2024 Welcome to the five hundred and forty-fourth issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events There will be one final, unscheduled LLVM release. LLVM 18.1.7 is expected on Tuesday , fixing a regression introduced in 18.1.6. According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Renato Golin, Quentin Colombet, Johannes Doerfert. Online sync-ups on the following topics: MLIR C/C++ frontend, pointer authentication, SPIR-V, MemorySSA, AArch64, OpenMP, new contributors, Clang C/C++ language working group, Flang, RISC-V, libc, MLIR, HLSL, MLGO. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Chaitanya Shahare is seeking input to a survey for the LLVM.org website redesign that's being pursued as a GSoC project. There was further discussion on the proposal to introduce the 'sandbox vectorizer', with Alina Sbirlea indicating interest from some Google LLVM engineers in experimenting , Eli Friedman suggesting checkpointing via cloning as an alternative , and Arthur Eubanks offering a summary of pros/cons for various options for where this could live . Stephen Tozer kicked off an RFC thread on updating how debug locations are handled in LLVM , suggesting making DebugLoc a mandatory argument for creating instructions.
This attracted a counter-proposal from Nikita Popov suggesting that this be enforced in the verifier instead. Donald Chen proposed adding operandIndex to the MLIR EffectInstance class , allowing an effect to be specified on an operand rather than a value used by some operand. The 66th edition of MLIR News is now available . Maksim Levental shared a more minimal MLIR project example (self-described as an 'unconventional' approach). Rong Xu and Han Shen posted a proposal for optimising the Linux kernel with AutoFDO including ThinLTO and Propeller , along with a variety of performance measurements. Joshua Cranmer posted an RFC on changing llvm::Value's layout , motivated by a lack of space to add more fast-math flags. LLVM commits getelementptr nuw and nsw flags were introduced. 8cdecd4 . A new ptrauth(...) IR constant was introduced to represent a ptrauth signed pointer as used in AArch64 PAuth. 0edc97f . A design document was added for the TableGen specification of DXIL operations. 495bc3c . Documentation for updating code to handle debug records was added. a8e03ae . The RISC-V backend gained a rematerialisable pseudo instruction for LUI+ADDI for global addresses. 2d00c6f . The exnref type was added to the WebAssembly backend. c179d50 . The SPIR-V backend gained support for llvm.ptr.annotation. f63adf3 . DIExpression::foldConstantMath was introduced. b12f81b , 69969c7 , f4681be . Clang commits A new flag was added to only emit debuginfo for referenced member functions. 6e975ec . HLSL availability diagnostics were implemented. 8890209 . Work to split up Sema continued. ed35a92 . Other project commits BOLT gained a script to automatically generate much of its user guide. 765ce86 . LLD now supports Thumb PLTs. 760c2aa . The deprecated dialect-specific bufferization passes were removed from MLIR. debdbed . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34
https://llvmweekly.org/issue/543 | LLVM Weekly - #543, May 27th 2024 Welcome to the five hundred and forty-third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events Min-Yih Hsu blogged about legalizations in LLVM backends . According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Johannes Doerfert. Online sync-ups on the following topics: Flang, new contributors, pointer authentication, LLVM/Offload, classic flang, loop optimisations, OpenMP, MLIR. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums Tanya Lattner shared a save the date for the US LLVM Developers' Meeting , which will take place October 22-24th in Santa Clara. Applications are open to volunteer for the program and travel grant committees. "pablo" posted an MLIR RFC on adding transform.loop.tile_indirection_using_forall . Corentin Ferry proposed removing signless type support in MLIR's EmitC dialect . There was quite a lot of further discussion about the idea of introducing a new 'one-shot' dialect conversion driver in MLIR. See in particular Alex Zinenko's discussion of the history behind the current design and assessment of the current state . Eric Fiselier posted an RFC aiming to end a point of disagreement between libc++ contributors concerning support for "in-tree headers" during development . LLVM commits Conditional indirect call promotion using vtable-based comparison was implemented. 5d3f296 .
constexpr getelementptrs are now canonicalised to i8 element types. 8e8d259 . Contributions to LLVM as of June 1st will no longer need to be licensed under both the current and legacy license. ee76f1e . The -call-latency command-line option to llvm-mca can be used to control the assumed latency of a call instruction. 848bef5 . AMDGPU-specific module splitting was implemented, allowing --lto-partitions to work more consistently. d7c3713 . Inline assembly is now supported for SPIR-V. 214e6b4 . For RISC-V vector, the vsetvli insertion pass now runs after RVV register allocation. 675e7bd . Clang commits Work started to move away from Clang assuming the default address space is AS 0. 10edb49 . Sized deallocation was enabled by default in C++14 onwards. 130e93c . The security.SetgidSetuidOrder checker was implemented in Clang's static analyzer. 11b97da . Other project commits LLVM's libc docgen now handles macros, POSIX status, and validation. 0f6c4d8 . C++20 atomic_ref was implemented in libcxx. 42ba740 . MLIR's various topological sort utilities were consolidated into one place. b00e0c1 . Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34
https://www.iso.org/es/sectores/gestion-servicios | ISO - Management and services. Topics: management systems, banking and finance, education, governance, HR, tourism, healthcare services, ageing societies, smart cities, environmental management, and others. In the dynamic world of management and services, achieving excellence and efficiency is paramount. The standards in this sector provide frameworks for good practice, ensuring quality, sustainability, and customer satisfaction. They cover a wide range of areas, including strategic planning, process optimization, and service delivery, all while addressing the diverse needs of businesses and communities.
Explore our management standards: ISO 9001, 14001, 45001, 50001, 41001, 22000, 22301, 18788, 17298.
Essential standards: ISO 41001 Facility management — Management systems — Requirements with guidance for use (published 2018, CHF 196). ISO 9001 Quality management systems — Requirements (published 2015, CHF 179). ISO/IEC 27001 Information security, cybersecurity and privacy protection — Information security management systems — Requirements (published 2022, CHF 155). ISO/IEC 29187-1 Information technology — Identification of privacy protection requirements pertaining to learning, education and training (LET) — Part 1: Framework and reference model (published 2013, CHF 0). ISO 50001 Energy management systems — Requirements with guidance for use (revised 2024, CHF 179). ISO/IEC 12785-1 Information technology — Learning, education, and training — Content packaging — Part 1: Information model (revised 2025, CHF 0). ISO/IEC 24751-3 Information technology — Individualized adaptability and accessibility in e-learning, education and training — Part 3: "Access for all" digital resource description (published 2008, CHF 0). Insights: Supply chain reliability: strengthening business resilience. As global supply chains become more complex, staying ahead of risks is not just an advantage but a necessity. A closer look at the technologies, strategies, and innovations shaping the future of supply chains, and how they can help businesses operate from a position of strength. Quality management: the path to continuous improvement. There are many misconceptions about what a quality management system (QMS) does. A deeper look at what a QMS is, what it should look like, and why your business undoubtedly needs one. The circular economy: building trust through conformity assessment. Standards and conformity assessment provide assurance on aspects of the circular economy including product lifetime and recyclability, safety and efficiency.
| 2026-01-13T09:30:34
http://docs.buildbot.net/current/manual/configuration/changesources.html#changes | 2.5.3. Change Sources and Changes — Buildbot 4.3.0 documentation

2.5.3. Change Sources and Changes
A change source is the mechanism which Buildbot uses to get information about new changes in a repository maintained by a Version Control System. Change sources fall broadly into two categories: pollers, which periodically check the repository for updates, and hooks, where the repository is configured to notify Buildbot whenever an update occurs.

A Change is the abstract way Buildbot represents a change in any of the Version Control Systems it supports. It contains just enough information to acquire a specific version of the tree when needed; this usually happens as one of the first steps in a Build. The concept does not map perfectly to every version control system. For example, for CVS, Buildbot must guess that version updates made to multiple files within a short time represent a single change.

Changes can be provided by a variety of ChangeSource types, although any given project will typically have only a single ChangeSource active.

2.5.3.1. How Different VC Systems Specify Sources

For CVS, the static specifications are repository and module. In addition to those, each build uses a timestamp (or omits the timestamp to mean the latest) and a branch tag (which defaults to HEAD). These parameters collectively specify a set of sources from which a build may be performed.
Subversion combines the repository, module, and branch into a single Subversion URL parameter. Within that scope, source checkouts can be specified by a numeric revision number (a repository-wide monotonically-increasing marker, such that each transaction that changes the repository is indexed by a different revision number), or a revision timestamp. When branches are used, the repository and module form a static baseURL , while each build has a revision number and a branch (which defaults to a statically-specified defaultBranch ). The baseURL and branch are simply concatenated together to derive the repourl to use for the checkout. Perforce is similar. The server is specified through a P4PORT parameter. Module and branch are specified in a single depot path, and revisions are depot-wide. When branches are used, the p4base and defaultBranch are concatenated together to produce the depot path. Bzr (which is a descendant of Arch/Bazaar, and is frequently referred to as “Bazaar”) has the same sort of repository-vs-workspace model as Arch, but the repository data can either be stored inside the working directory or kept elsewhere (either on the same machine or on an entirely different machine). For the purposes of Buildbot (which never commits changes), the repository is specified with a URL and a revision number. The most common way to obtain read-only access to a bzr tree is via HTTP, simply by making the repository visible through a web server like Apache. Bzr can also use FTP and SFTP servers, if the worker process has sufficient privileges to access them. Higher performance can be obtained by running a special Bazaar-specific server. None of these matter to the buildbot: the repository URL just has to match the kind of server being used. The repoURL argument provides the location of the repository. 
Branches are expressed as subdirectories of the main central repository, which means that if branches are being used, the BZR step is given a baseURL and defaultBranch instead of getting the repoURL argument. Darcs doesn’t really have the notion of a single master repository. Nor does it really have branches. In Darcs, each working directory is also a repository, and there are operations to push and pull patches from one of these repositories to another. For the Buildbot’s purposes, all you need to do is specify the URL of a repository that you want to build from. The worker will then pull the latest patches from that repository and build them. Multiple branches are implemented by using multiple repositories (possibly living on the same server). Builders which use Darcs therefore have a static repourl which specifies the location of the repository. If branches are being used, the source Step is instead configured with a baseURL and a defaultBranch , and the two strings are simply concatenated together to obtain the repository’s URL. Each build then has a specific branch which replaces defaultBranch , or just uses the default one. Instead of a revision number, each build can have a context , which is a string that records all the patches that are present in a given tree (this is the output of darcs changes --context , and is considerably less concise than, e.g. Subversion’s revision number, but the patch-reordering flexibility of Darcs makes it impossible to provide a shorter useful specification). Mercurial follows a decentralized model, and each repository can have several branches and tags. The source Step is configured with a static repourl which specifies the location of the repository. Branches are configured with the defaultBranch argument. The revision is the hash identifier returned by hg identify . Git also follows a decentralized model, and each repository can have several branches and tags. 
The source Step is configured with a static repourl which specifies the location of the repository. In addition, an optional branch parameter can be specified to check out code from a specific branch instead of the default master branch. The revision is specified as a SHA1 hash as returned by e.g. git rev-parse. No attempt is made to ensure that the specified revision is actually a subset of the specified branch.

Monotone is another VC system that follows a decentralized model where each repository can have several branches and tags. The source Step is configured with static repourl and branch parameters, which specify the location of the repository and the branch to use. The revision is specified as a SHA1 hash as returned by e.g. mtn automate select w:. No attempt is made to ensure that the specified revision is actually a subset of the specified branch.

Comparison

    Name        Change     Revision    Branches
    CVS         patch [1]  timestamp   unnamed
    Subversion  revision   integer     directories
    Git         commit     sha1 hash   named refs
    Mercurial   changeset  sha1 hash   different repos or (permanently) named commits
    Darcs       ?          none [2]    different repos
    Bazaar      ?          ?           ?
    Perforce    ?          ?           ?
    BitKeeper   changeset  ?           different repos

[1] Note that CVS only tracks patches to individual files. Buildbot tries to recognize coordinated changes to multiple files by correlating change times.

[2] Darcs does not have a concise way of representing a particular revision of the source.

Tree Stability

Changes tend to arrive at a buildmaster in bursts. In many cases, these bursts of changes are meant to be taken together. For example, a developer may have pushed multiple commits to a DVCS that comprise the same new feature or bugfix. To avoid trying to build every change, Buildbot supports the notion of tree stability, by waiting for a burst of changes to finish before starting to schedule builds. This is implemented as a timer, with builds not scheduled until no changes have occurred for the duration of the timer.
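The tree-stable timer itself is configured on the scheduler rather than on the change source. As an illustrative master.cfg fragment (the branch and builder names here are assumptions, not taken from this manual), a scheduler might wait until a branch has been quiet for two minutes before building:

```python
from buildbot.plugins import schedulers, util

c = BuildmasterConfig = {}  # standard master.cfg boilerplate

# Wait until no new changes have arrived on 'master' for 120 seconds
# before scheduling a build (the tree stability timer described above).
c['schedulers'] = [
    schedulers.SingleBranchScheduler(
        name="master-stable",
        change_filter=util.ChangeFilter(branch='master'),
        treeStableTimer=120,          # seconds of quiet required
        builderNames=["runtests"],    # assumed builder name
    )
]
```

With treeStableTimer=None the scheduler would instead build every change individually.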
2.5.3.2. Choosing a Change Source

There are a variety of ChangeSource classes available, some of which are meant to be used in conjunction with other tools to deliver Change events from the VC repository to the buildmaster. As a quick guide, here is a list of VC systems and the ChangeSources that might be useful with them. Note that some of these modules are in Buildbot's master/contrib directory, meaning that they have been offered by other users in hopes they may be useful, and might require some additional work to make them functional.

CVS
  - CVSMaildirSource (watching mail sent by the master/contrib/buildbot_cvs_mail.py script)
  - PBChangeSource (listening for connections from buildbot sendchange run in a loginfo script)
  - PBChangeSource (listening for connections from a long-running master/contrib/viewcvspoll.py polling process which examines the ViewCVS database directly)
  - Change Hooks in WebStatus

SVN
  - PBChangeSource (listening for connections from master/contrib/svn_buildbot.py run in a postcommit script)
  - PBChangeSource (listening for connections from a long-running master/contrib/svn_watcher.py or master/contrib/svnpoller.py polling process)
  - SVNCommitEmailMaildirSource (watching for email sent by commit-email.pl)
  - SVNPoller (polling the SVN repository)
  - Change Hooks in WebStatus

Darcs
  - PBChangeSource (listening for connections from master/contrib/darcs_buildbot.py in a commit script)
  - Change Hooks in WebStatus

Mercurial
  - Change Hooks in WebStatus (including master/contrib/hgbuildbot.py, configurable in a changegroup hook)
  - BitBucket change hook (specifically designed for BitBucket notifications, but requiring a publicly-accessible WebStatus)
  - HgPoller (polling a remote Mercurial repository)
  - BitbucketPullrequestPoller (polling Bitbucket for pull requests)
  - Mail-parsing ChangeSources, though there are no ready-to-use recipes

Bzr (the newer Bazaar)
  - PBChangeSource (listening for connections from master/contrib/bzr_buildbot.py run in a post-change-branch-tip or commit hook)
  - BzrPoller (polling the Bzr repository)
  - Change Hooks in WebStatus

Git
  - PBChangeSource (listening for connections from master/contrib/git_buildbot.py run in the post-receive hook)
  - PBChangeSource (listening for connections from master/contrib/github_buildbot.py, which listens for notifications from GitHub)
  - Change Hooks in WebStatus
  - GitHub change hook (specifically designed for GitHub notifications, but requiring a publicly-accessible WebStatus)
  - BitBucket change hook (specifically designed for BitBucket notifications, but requiring a publicly-accessible WebStatus)
  - GitPoller (polling a remote Git repository)
  - GitHubPullrequestPoller (polling the GitHub API for pull requests)
  - BitbucketPullrequestPoller (polling Bitbucket for pull requests)

Repo/Gerrit
  - GerritChangeSource connects to Gerrit via SSH and optionally HTTP to get a live stream of changes
  - GerritEventLogPoller connects to Gerrit via HTTP with the help of the events-log plugin

Monotone
  - PBChangeSource (listening for connections from monotone-buildbot.lua, which is available with Monotone)

All VC systems can be driven by a PBChangeSource and the buildbot sendchange tool run from some form of commit script. If you write an email parsing function, they can also all be driven by a suitable mail-parsing source. Additionally, handlers for web-based notification (i.e. from GitHub) can be used with WebStatus' change_hook module. The interface is simple, so adding your own handlers (and sharing!) should be a breeze.

See the Change Source Index for a full list of change sources.

2.5.3.3. Configuring Change Sources

The change_source configuration key holds all active change sources for the configuration. Most configurations have a single ChangeSource, watching only a single tree, e.g.:

    from buildbot.plugins import changes

    c['change_source'] = changes.PBChangeSource()

For more advanced configurations, the parameter can be a list of change sources:

    source1 = ...
    source2 = ...
    c['change_source'] = [source1, source2]

Repository and Project

ChangeSources will, in general, automatically provide the proper repository attribute for any changes they produce. For systems which operate on URL-like specifiers, this is a repository URL. Other ChangeSources adapt the concept as necessary.

Many ChangeSources allow you to specify a project as well. This attribute is useful when building from several distinct codebases in the same buildmaster: the project string can serve to differentiate the codebases. Schedulers can filter on project, so you can configure different builders to run for each project.

2.5.3.4. Mail-parsing ChangeSources

Many projects publish information about changes to their source tree by sending an email message out to a mailing list, frequently named PROJECT-commits or PROJECT-changes. Each message usually contains a description of the change (who made the change, which files were affected) and sometimes a copy of the diff. Humans can subscribe to this list to stay informed about what's happening to the source tree.

Buildbot can also subscribe to a -commits mailing list and can trigger builds in response to Changes that it hears about. The buildmaster admin needs to arrange for these email messages to arrive in a place where the buildmaster can find them, and to configure the buildmaster to parse the messages correctly. Once that is in place, the email parser will create Change objects and deliver them to the schedulers (see Schedulers) just like any other ChangeSource.

There are two components to setting up an email-based ChangeSource. The first is to route the email messages to the buildmaster, which is done by dropping them into a maildir. The second is to actually parse the messages, which is highly dependent upon the tool that was used to create them. Each VC system has a collection of favorite change-emailing tools, each with a slightly different format and its own parsing function.
Buildbot has a separate ChangeSource variant for each of these parsing functions. Once you've chosen a maildir location and a parsing function, create the change source and put it in change_source:

    from buildbot.plugins import changes

    c['change_source'] = changes.CVSMaildirSource("~/maildir-buildbot", prefix="/trunk/")

Subscribing the Buildmaster

The recommended way to install Buildbot is to create a dedicated account for the buildmaster. If you do this, the account will probably have a distinct email address (perhaps buildmaster@example.org). Then just arrange for this account's email to be delivered to a suitable maildir (described in the next section).

If Buildbot does not have its own account, extension addresses can be used to distinguish between emails intended for the buildmaster and emails intended for the rest of the account. In most modern MTAs, the foo@example.org account has control over every email address at example.org which begins with "foo", such that emails addressed to account-foo@example.org can be delivered to a different destination than account-bar@example.org. qmail does this by using separate .qmail files for the two destinations (.qmail-foo and .qmail-bar, with .qmail controlling the base address and .qmail-default controlling all other extensions). Other MTAs have similar mechanisms. Thus you can assign an extension address like foo-buildmaster@example.org to the buildmaster and retain foo@example.org for your own use.

Using Maildirs

A maildir is a simple directory structure originally developed for qmail that allows safe atomic update without locking. Create a base directory with three subdirectories: new, tmp, and cur. When messages arrive, they are put into a uniquely-named file (using pids, timestamps, and random numbers) in tmp. When the file is complete, it is atomically renamed into new. Eventually the buildmaster notices the file in new, reads and parses the contents, then moves it into cur.
A cronjob can be used to delete files in cur at leisure. Maildirs are frequently created with the maildirmake tool, but a simple mkdir -p ~/MAILDIR/{cur,new,tmp} is pretty much equivalent.

Many modern MTAs can deliver directly to maildirs. The usual .forward or .procmailrc syntax is to name the base directory with a trailing slash, so something like ~/MAILDIR/. qmail and postfix are maildir-capable MTAs, and procmail is a maildir-capable MDA (Mail Delivery Agent).

Here is an example procmail config, located in ~/.procmailrc:

    # .procmailrc
    # routes incoming mail to appropriate mailboxes
    PATH=/usr/bin:/usr/local/bin
    MAILDIR=$HOME/Mail
    LOGFILE=.procmail_log
    SHELL=/bin/sh

    :0
    *
    new

If procmail is not set up on a system-wide basis, then the following one-line .forward file will invoke it:

    !/usr/bin/procmail

For MTAs which cannot put files into maildirs directly, the safecat tool can be executed from a .forward file to accomplish the same thing.

The buildmaster uses the Linux DNotify facility to receive immediate notification when the maildir's new directory has changed. When this facility is not available, it polls the directory for new messages, every 10 seconds by default.

Parsing Email Change Messages

The second component to setting up an email-based ChangeSource is to parse the actual notices. This is highly dependent upon the VC system and commit script in use. A couple of common tools used to create these change emails, along with the Buildbot tools that parse them, are:

CVS
    Buildbot CVS MailNotifier (parsed by CVSMaildirSource)
SVN
    svnmailer (http://opensource.perlig.de/en/svnmailer/); commit-email.pl (parsed by SVNCommitEmailMaildirSource)
Bzr
    Launchpad (parsed by BzrLaunchpadEmailMaildirSource)
Mercurial
    NotifyExtension (https://www.mercurial-scm.org/wiki/NotifyExtension)
Git
    post-receive-email (http://git.kernel.org/?p=git/git.git;a=blob;f=contrib/hooks/post-receive-email;hb=HEAD)

The following sections describe the parsers available for each of these tools.
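The maildir delivery-and-consumption cycle described under Using Maildirs above can be sketched in plain Python. This is an illustrative sketch only (the function names and paths are invented, not Buildbot APIs): an MTA writes into tmp/ and atomically renames into new/, while the consumer reads from new/ and moves messages to cur/:

```python
import os
import tempfile

def deliver(maildir, name, content):
    """MTA side: write into tmp/, then atomically rename into new/."""
    tmp_path = os.path.join(maildir, 'tmp', name)
    with open(tmp_path, 'w') as f:
        f.write(content)
    os.rename(tmp_path, os.path.join(maildir, 'new', name))

def consume(maildir):
    """Buildmaster side: read each message in new/, then move it to cur/."""
    messages = []
    new_dir = os.path.join(maildir, 'new')
    for name in sorted(os.listdir(new_dir)):
        path = os.path.join(new_dir, name)
        with open(path) as f:
            messages.append(f.read())
        os.rename(path, os.path.join(maildir, 'cur', name))
    return messages

# Create the three maildir subdirectories (equivalent to maildirmake).
maildir = tempfile.mkdtemp()
for sub in ('tmp', 'new', 'cur'):
    os.makedirs(os.path.join(maildir, sub))

deliver(maildir, '1234.msg', 'commit notice')
print(consume(maildir))  # messages read from new/ and moved into cur/
```

The os.rename step is what makes the scheme lock-free: a message only ever appears in new/ once it is complete.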
Most of these parsers accept a prefix= argument, which is used to limit the set of files that the buildmaster pays attention to. This is most useful for systems like CVS and SVN which put multiple projects in a single repository (or use repository names to indicate branches). Each filename that appears in the email is tested against the prefix: if the filename does not start with the prefix, the file is ignored. If the filename does start with the prefix, that prefix is stripped from the filename before any further processing is done. Thus the prefix usually ends with a slash.

CVSMaildirSource

class buildbot.changes.mail.CVSMaildirSource

This parser works with the master/contrib/buildbot_cvs_mail.py script. The script sends an email containing all the files submitted in one directory. It is invoked by using the CVSROOT/loginfo facility. Buildbot's CVSMaildirSource knows how to parse these messages and turn them into Change objects. It takes the directory name of the maildir root. For example:

    from buildbot.plugins import changes

    c['change_source'] = changes.CVSMaildirSource("/home/buildbot/Mail")

Configuration of CVS and buildbot_cvs_mail.py

CVS must be configured to invoke the buildbot_cvs_mail.py script when files are checked in. This is done via the CVS loginfo configuration file. To update this, first do:

    cvs checkout CVSROOT

cd to the CVSROOT directory and edit the file loginfo, adding a line like:

    SomeModule /cvsroot/CVSROOT/buildbot_cvs_mail.py --cvsroot :ext:example.com:/cvsroot -e buildbot -P SomeModule %{sVv}

Note: For CVS version 1.12.x, the --path %p option is required. Versions 1.11.x and 1.12.x report the directory path differently.

In the above example the buildbot_cvs_mail.py script is placed under /cvsroot/CVSROOT, but it can be anywhere. Run the script with --help to see all the options. At the very least, the options -e (email) and -P (project) should be specified. The line must end with %{sVv}.
This is expanded to the files that were modified. Additional entries can be added to support more modules. See buildbot_cvs_mail.py --help for more information on the available options.

SVNCommitEmailMaildirSource

class buildbot.changes.mail.SVNCommitEmailMaildirSource

SVNCommitEmailMaildirSource parses messages sent out by the commit-email.pl script, which is included in the Subversion distribution. It does not currently handle branches: all of the Change objects that it creates will be associated with the default (i.e. trunk) branch.

    from buildbot.plugins import changes

    c['change_source'] = changes.SVNCommitEmailMaildirSource("~/maildir-buildbot")

BzrLaunchpadEmailMaildirSource

class buildbot.changes.mail.BzrLaunchpadEmailMaildirSource

BzrLaunchpadEmailMaildirSource parses the mails that are sent to addresses that subscribe to branch revision notifications for a bzr branch hosted on Launchpad.

The branch name defaults to lp:<Launchpad path>, for example lp:~maria-captains/maria/5.1. If only a single branch is used, the default branch name can be changed by setting defaultBranch. For multiple branches, pass a dictionary as the value of the branchMap option to map specific repository paths to specific branch names (see the example below). The leading lp: prefix of the path is optional.

The prefix option is not supported (it is silently ignored). Use branchMap and defaultBranch instead to assign changes to branches (and just do not subscribe the Buildbot to branches that are not of interest).

The revision number is obtained from the email text. The bzr revision id is not available in the mails sent by Launchpad. However, it is possible to set the bzr append_revisions_only option for public shared repositories to avoid new pushes of merges changing the meaning of old revision numbers.

    from buildbot.plugins import changes

    bm = {'lp:~maria-captains/maria/5.1': '5.1', 'lp:~maria-captains/maria/6.0': '6.0'}
    c['change_source'] = changes.BzrLaunchpadEmailMaildirSource("~/maildir-buildbot", branchMap=bm)

2.5.3.5. PBChangeSource

class buildbot.changes.pb.PBChangeSource

PBChangeSource actually listens on a TCP port for clients to connect and push change notices into the buildmaster. This is used by the built-in buildbot sendchange notification tool, as well as several version-control hook scripts. This change source is also useful for creating new kinds of change sources that work on a push model instead of some kind of subscription scheme, for example a script which is run out of an email .forward file.

This ChangeSource always runs on the same TCP port as the workers. It shares the same protocol, and in fact shares the same space of "usernames", so you cannot configure a PBChangeSource with the same name as a worker.

If you have a publicly accessible worker port and are using PBChangeSource, you must establish a secure username and password for the change source. If your sendchange credentials are known (e.g., the defaults), then your buildmaster is susceptible to injection of arbitrary changes, which (depending on the build factories) could lead to arbitrary code execution on workers.

The PBChangeSource is created with the following arguments:

port
    Which port to listen on. If None (which is the default), it shares the port used for worker connections.
user
    The user account that the client program must use to connect. Defaults to change.
passwd
    The password for the connection. Defaults to changepw. Can be a Secret. Do not use this default on a publicly exposed port!
prefix
    The prefix to be found and stripped from filenames delivered over the connection, defaulting to None. Any filenames which do not start with this prefix will be removed. If all the filenames in a given Change are removed, then that whole Change will be dropped. This string should probably end with a directory separator.
This is useful for changes coming from version control systems that represent branches as parent directories within the repository (like SVN and Perforce). Use a prefix of trunk/ or project/branches/foobranch/ to follow only one branch and to get correct tree-relative filenames. Without a prefix, the PBChangeSource will probably deliver Changes with filenames like trunk/foo.c instead of just foo.c. Of course this also depends upon the tool sending the Changes in (like buildbot sendchange) and what filenames it is delivering: that tool may be filtering and stripping prefixes at the sending end.

For example:

    from buildbot.plugins import changes

    c['change_source'] = changes.PBChangeSource(port=9999, user='laura', passwd='fpga')

The following hooks are useful for sending changes to a PBChangeSource:

Bzr Hook

Bzr is also written in Python, and the Bzr hook depends on Twisted to send the changes. To install, put master/contrib/bzr_buildbot.py in a bzr plugins directory (e.g., ~/.bazaar/plugins). Then, in one of your bazaar conf files (e.g., ~/.bazaar/locations.conf), set the location you want to connect with Buildbot with these keys:

buildbot_on
    One of 'commit', 'push', or 'change'. Turns the plugin on to report changes via commit, changes via push, or any changes to the trunk. 'change' is recommended.
buildbot_server
    (required to send to a Buildbot master) The URL of the Buildbot master to which you will connect (as of this writing, the same server and port to which workers connect).
buildbot_port
    (optional, defaults to 9989) The port of the Buildbot master to which you will connect (as of this writing, the same server and port to which workers connect).
buildbot_pqm
    (optional, defaults to not pqm) Normally, the user that commits the revision is the user that is responsible for the change.
    When run in a pqm (Patch Queue Manager, see https://launchpad.net/pqm) environment, the user that commits is the Patch Queue Manager, and the user that committed the parent revision is responsible for the change. To turn on pqm mode, set this value to any of (case-insensitive) "Yes", "Y", "True", or "T".
buildbot_dry_run
    (optional, defaults to not a dry run) Normally, the post-commit hook will attempt to communicate with the configured Buildbot server and port. If this parameter is included and set to any of (case-insensitive) "Yes", "Y", "True", or "T", then the hook will simply print what it would have sent, but not attempt to contact the Buildbot master.
buildbot_send_branch_name
    (optional, defaults to not sending the branch name) If your Buildbot's bzr source build step uses a repourl, do not turn this on. If your Buildbot's bzr build step uses a baseURL, then you may set this value to any of (case-insensitive) "Yes", "Y", "True", or "T" to have the Buildbot master append the branch name to the baseURL.

Note: The bzr smart server (as of version 2.2.2) doesn't know how to resolve bzr:// urls into absolute paths, so any paths in locations.conf won't match, hence no change notifications will be sent to Buildbot. Setting configuration parameters globally or in-branch might still work. When Buildbot no longer has a hardcoded password, it will be a configuration option here as well.

Here's a simple example that you might have in your ~/.bazaar/locations.conf:

    [chroot-*:///var/local/myrepo/mybranch]
    buildbot_on = change
    buildbot_server = localhost

2.5.3.6. P4Source

The P4Source periodically polls a Perforce depot for changes. It accepts the following arguments:

p4port
    The Perforce server to connect to (as host:port).
p4user
    The Perforce user.
p4passwd
    The Perforce password.
p4base
    The base depot path to watch, without the trailing '/...'.
p4bin
    An optional string parameter. Specify the location of the perforce command line binary (p4).
    You only need to do this if the perforce binary is not in the path of the Buildbot user. Defaults to p4.
split_file
    A function that maps a pathname, without the leading p4base, to a (branch, filename) tuple. The default just returns (None, branchfile), which effectively disables branch support. You should supply a function which understands your repository structure.
pollInterval
    How often to poll, in seconds. Defaults to 600 (10 minutes).
pollRandomDelayMin
    Minimum delay in seconds to wait before each poll; the default is 0. This is useful in case you have a lot of pollers and you want to spread the polling load over a period of time. Setting it equal to the maximum delay will effectively delay all polls by a fixed amount of time. Must be less than or equal to the maximum delay.
pollRandomDelayMax
    Maximum delay in seconds to wait before each poll; the default is 0. This is useful in case you have a lot of pollers and you want to spread the polling load over a period of time. Must be less than the poll interval.
project
    Set the name of the project to be used for the P4Source. This will then be set in any changes generated by the P4Source, and can be used in a Change Filter for triggering particular builders.
pollAtLaunch
    Determines when the first poll occurs. True = immediately on launch; False = wait for one pollInterval (default).
histmax
    The maximum number of changes to inspect at a time. If more than this number occur since the last poll, older changes will be silently ignored.
encoding
    The character encoding of p4's output. This defaults to "utf8", but if your commit messages are in another encoding, specify that here. For example, if you're using Perforce on Windows, you may need to use "cp437" as the encoding if "utf8" generates errors in your master log.
server_tz
    The timezone of the Perforce server, using the usual timezone format (e.g. "Europe/Stockholm"), in case it's not in UTC.
use_tickets
    Set to True to use ticket-based authentication instead of passwords (but you still need to specify p4passwd).
ticket_login_interval
    How often to get a new ticket, in seconds, when use_tickets is enabled. Defaults to 86400 (24 hours).
revlink
    A function that maps a branch and revision to a valid URL (e.g. p4web), stored along with the change. This function must be a callable which takes two arguments, the branch and the revision. Defaults to lambda branch, revision: (u'').
resolvewho
    A function that resolves the Perforce 'user@workspace' into a more verbose form, stored as the author of the change. Useful when usernames do not match email addresses and external, client-side lookup is required. This function must be a callable which takes one argument. Defaults to lambda who: (who).

Example #1

This configuration uses the P4PORT, P4USER, and P4PASSWD specified in the buildmaster's environment. It watches a project in which the branch name is simply the next path component, and the file is all path components after.

    from buildbot.plugins import changes

    s = changes.P4Source(p4base='//depot/project/',
                         split_file=lambda branchfile: branchfile.split('/', 1))
    c['change_source'] = s

Example #2

Similar to the previous example, but also resolves the branch and revision into a valid revlink.

    from buildbot.plugins import changes

    s = changes.P4Source(
        p4base='//depot/project/',
        split_file=lambda branchfile: branchfile.split('/', 1),
        revlink=lambda branch, revision: 'http://p4web:8080/@md=d&@/{}?ac=10'.format(revision))
    c['change_source'] = s

2.5.3.7. SVNPoller

class buildbot.changes.svnpoller.SVNPoller

The SVNPoller is a ChangeSource which periodically polls a Subversion repository for new revisions, by running the svn log command in a subshell. It can watch a single branch or multiple branches.
SVNPoller accepts the following arguments:

repourl
    The base URL path to watch, like svn://svn.twistedmatrix.com/svn/Twisted/trunk, or http://divmod.org/svn/Divmo/, or even file:///home/svn/Repository/ProjectA/branches/1.5/. This must include the access scheme, the location of the repository (both the hostname for remote ones, and any additional directory names necessary to get to the repository), and the sub-path within the repository's virtual filesystem for the project and branch of interest. The SVNPoller will only pay attention to files inside the subdirectory specified by the complete repourl.

split_file
    A function to convert pathnames into (branch, relative_pathname) tuples. Use this to explain your repository's branch-naming policy to SVNPoller. This function must accept a single string (the pathname relative to the repository) and return a two-entry tuple. Directory pathnames always end with a trailing slash to distinguish them from files, like trunk/src/ or src/. There are a few utility functions in buildbot.changes.svnpoller that can be used as a split_file function; see below for details. For directories, the relative pathname returned by split_file should end with a trailing slash, but an empty string is also accepted for the root, as in ("branches/1.5.x", "") converted from "branches/1.5.x/". The default value always returns (None, path), which indicates that all files are on the trunk. Subclasses of SVNPoller can override the split_file method instead of using the split_file= argument.

project
    Set the name of the project to be used for the SVNPoller. This will then be set in any changes generated by the SVNPoller, and can be used in a Change Filter for triggering particular builders.

svnuser
    An optional string parameter. If set, the --username argument will be added to all svn commands. Use this if you have to authenticate to the svn server before you can do svn info or svn log commands. Can be a Secret.
svnpasswd
    Like svnuser, this will cause a --password argument to be passed to all svn commands. Can be a Secret.

pollInterval
    How often to poll, in seconds. Defaults to 600 (checking once every 10 minutes). Lower this if you want the Buildbot to notice changes faster; raise it if you want to reduce the network and CPU load on your svn server. Please be considerate of public SVN repositories by using a large interval when polling them.

pollRandomDelayMin
    Minimum delay in seconds to wait before each poll; defaults to 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Setting it equal to the maximum delay will effectively delay all polls by a fixed amount of time. Must be less than or equal to the maximum delay.

pollRandomDelayMax
    Maximum delay in seconds to wait before each poll; defaults to 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Must be less than the poll interval.

pollAtLaunch
    Determines when the first poll occurs. True = immediately on launch; False = wait for one pollInterval (default).

histmax
    The maximum number of changes to inspect at a time. Every pollInterval seconds, the SVNPoller asks for the last histmax changes and looks through them for any revisions it does not already know about. If more than histmax revisions have been committed since the last poll, older changes will be silently ignored. Larger values of histmax will cause more time and memory to be consumed on each poll attempt. histmax defaults to 100.

svnbin
    This controls the svn executable to use. If Subversion is installed in an unusual place on your system (outside of the buildmaster's PATH), use this to tell SVNPoller where to find it. The default value of svn will almost always be sufficient.

revlinktmpl
    This parameter is deprecated in favour of specifying a global revlink option.
This parameter allows a link to be provided for each revision (for example, to websvn or viewvc). These links appear anywhere changes are shown, such as on build or change pages. The proper form for this parameter is a URL with the portion that will substitute for a revision number replaced by '%s'. For example, 'http://myserver/websvn/revision.php?rev=%s' could be used to cause revision links to be created to a websvn repository viewer.

cachepath
    If specified, this is a pathname of a cache file that SVNPoller will use to store its state between restarts of the master.

extra_args
    If specified, the extra arguments will be added to the svn command args.

Several split file functions are available for common SVN repository layouts. For a poller that is only monitoring trunk, the default split file function is available explicitly as split_file_alwaystrunk:

from buildbot.plugins import changes, util

c['change_source'] = changes.SVNPoller(
    repourl="svn://svn.twistedmatrix.com/svn/Twisted/trunk",
    split_file=util.svn.split_file_alwaystrunk)

For repositories with the /trunk and /branches/BRANCH layout, split_file_branches will do the job:

from buildbot.plugins import changes, util

c['change_source'] = changes.SVNPoller(
    repourl="https://amanda.svn.sourceforge.net/svnroot/amanda/amanda",
    split_file=util.svn.split_file_branches)

When using this splitter, the poller will set the project attribute of any changes to the project attribute of the poller.

For repositories with the PROJECT/trunk and PROJECT/branches/BRANCH layout, split_file_projects_branches will do the job:

from buildbot.plugins import changes, util

c['change_source'] = changes.SVNPoller(
    repourl="https://amanda.svn.sourceforge.net/svnroot/amanda/",
    split_file=util.svn.split_file_projects_branches)

When using this splitter, the poller will set the project attribute of any changes to the project determined by the splitter.
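For layouts the built-in helpers don't cover, you can supply your own split_file function. The following is a minimal sketch for a hypothetical trunk/ plus branches/ layout; the function name and layout are illustrative, not from the Buildbot docs, and returning None to skip uninteresting files is assumed to behave as it does for the built-in splitters:

```python
# Hypothetical split_file for a repository laid out as
#   trunk/<path>           -> default branch
#   branches/<name>/<path> -> named branch
# Anything else (e.g. tags/) is skipped by returning None
# (assumed to be honored, as with the built-in splitters).
def split_file_trunk_and_branches(path):
    pieces = path.split('/')
    if pieces[0] == 'trunk':
        return (None, '/'.join(pieces[1:]))
    if pieces[0] == 'branches' and len(pieces) >= 2:
        # "branches/1.5.x/src/foo.c" -> ("branches/1.5.x", "src/foo.c")
        return ('/'.join(pieces[:2]), '/'.join(pieces[2:]))
    return None
```

It would then be passed to the poller as changes.SVNPoller(repourl=..., split_file=split_file_trunk_and_branches).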
The SVNPoller is highly adaptable to various Subversion layouts. See Customizing SVNPoller for details and some common scenarios.

2.5.3.8. Bzr Poller

If you cannot insert a Bzr hook in the server, you can use the BzrPoller. To use it, put master/contrib/bzr_buildbot.py somewhere that your Buildbot configuration can import it. Even putting it in the same directory as master.cfg should work. Install the poller in the Buildbot configuration as with any other change source. Minimally, provide a URL that you want to poll (bzr://, bzr+ssh://, or lp:), making sure the Buildbot user has the necessary privileges.

# put the bzr_buildbot.py file in the same directory as master.cfg
from bzr_buildbot import BzrPoller

c['change_source'] = BzrPoller(url='bzr://hostname/my_project',
                               poll_interval=300)

The BzrPoller parameters are:

url
    The URL to poll.

poll_interval
    The number of seconds to wait between polls. Defaults to 10 minutes.

branch_name
    Any value to be used as the branch name. Defaults to None; alternatively, specify a string, or the constants SHORT or FULL from bzr_buildbot.py to get the short branch name or the full branch address.

blame_merge_author
    Normally, the user that commits the revision is the user that is responsible for the change. When run in a pqm (Patch Queue Manager, see https://launchpad.net/pqm) environment, the user that commits is the Patch Queue Manager, and the user that committed the merged parent revision is responsible for the change. Set this value to True if the poller is pointed at a PQM-managed branch.

2.5.3.9. GitPoller

If you cannot take advantage of post-receive hooks, as provided by master/contrib/git_buildbot.py for example, then you can use the GitPoller. The GitPoller periodically fetches from a remote Git repository and processes any changes. It requires its own working directory for operation. The default should be adequate, but it can be overridden via the workdir property.
Note: There can only be a single GitPoller pointed at any given repository.

The GitPoller requires Git-1.7 and later. It accepts the following arguments:

repourl
    The git-url that describes the remote repository, e.g. git@example.com:foobaz/myrepo.git (see the git fetch help for more info on git-url formats).

branches
    One of the following:
    - a list of the branches to fetch. Non-existing branches are ignored.
    - True, indicating that all branches should be fetched
    - a callable which takes a single argument. It should take a remote refspec (such as 'refs/heads/master') and return a boolean indicating whether that branch should be fetched.
    If not provided, GitPoller will use HEAD to fetch the remote default branch.

branch
    Accepts a single branch name to fetch. Exists for backwards compatibility with old configurations.

pollInterval
    Interval in seconds between polls; default is 10 minutes.

pollRandomDelayMin
    Minimum delay in seconds to wait before each poll; defaults to 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Setting it equal to the maximum delay will effectively delay all polls by a fixed amount of time. Must be less than or equal to the maximum delay.

pollRandomDelayMax
    Maximum delay in seconds to wait before each poll; defaults to 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Must be less than the poll interval.

pollAtLaunch
    Determines when the first poll occurs. True = immediately on launch; False = wait for one pollInterval (default).

buildPushesWithNoCommits
    Determines if a push on a new branch, or an update of an already known branch with already known commits, should trigger a build. This is useful in case you have build steps depending on the name of the branch and you use topic branches for development. When you merge your topic branch into "master" (for instance), a new build will be triggered. (Defaults to False.)
gitbin
    Path to the Git binary; defaults to just 'git'.

category
    Set the category to be used for the changes produced by the GitPoller. This will then be set in any changes generated by the GitPoller, and can be used in a Change Filter for triggering particular builders.

project
    Set the name of the project to be used for the GitPoller. This will then be set in any changes generated by the GitPoller, and can be used in a Change Filter for triggering particular builders.

codebase
    (optional) Set the codebase that the poller is tracking. If set, GitPoller will store more granular, per-commit data that can be viewed in the web UI.

usetimestamps
    Parse each revision's commit timestamp (default is True), or ignore it in favor of the current time, so that recently processed commits appear together in the waterfall page.

encoding
    The encoding used to parse the author's name and commit message. The default encoding is 'utf-8'. This will not be applied to file names, since Git will translate non-ascii file names to unreadable escape sequences.

workdir
    The directory where the poller should keep its local repository. The default is gitpoller_work. If this is a relative path, it will be interpreted relative to the master's basedir. Multiple Git pollers can share the same directory.

only_tags
    Determines if the GitPoller should poll for new tags in the git repository.

sshPrivateKey
    (optional) Specifies the private SSH key for git to use. This may be either a Secret or just a string. This option requires Git-2.3 or later. The master must either have the host in the known hosts file, or the host key must be specified via the sshHostKey option.

sshHostKey
    (optional) Specifies the public host key to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. The host key must be in the form of <key type> <base64-encoded string>, e.g. ssh-rsa AAAAB3N<…>FAaQ==.
sshKnownHosts
    (optional) Specifies the contents of the SSH known_hosts file to match when authenticating with SSH public key authentication. This may be either a Secret or just a string. sshPrivateKey must be specified in order to use this option. sshHostKey must not be specified in order to use this option.

auth_credentials
    (optional) A username/password tuple to use when running git for fetch operations. The worker's git version needs to be at least 1.7.9.

git_credentials
    (optional) See GitCredentialOptions. The worker's git version needs to be at least 1.7.9.

A configuration for the Git poller might look like this:

from buildbot.plugins import changes

c['change_source'] = changes.GitPoller(repourl='git@example.com:foobaz/myrepo.git',
                                       branches=['master', 'great_new_feature'])

2.5.3.10. HgPoller

The HgPoller periodically pulls a named branch from a remote Mercurial repository and processes any changes. It requires its own working directory for operation, which must be specified via the workdir property.

The HgPoller requires a working hg executable and at least read-only access to the repository it polls (possibly through ssh keys or by tweaking the hgrc of the system user Buildbot runs as).

The HgPoller will not transmit any change if there are several heads on the watched named branch. This is similar (although not identical) to the Mercurial executable behaviour. This exceptional condition is usually the result of a developer mistake and usually does not last for long. It is reported in the logs. If fixed by a later merge, the buildmaster administrator does not have anything to do: that merge will be transmitted, together with the intermediate ones.

The HgPoller accepts the following arguments:

name
    The name of the poller. This must be unique, and defaults to the repourl.

repourl
    The url that describes the remote repository, e.g. http://hg.example.com/projects/myrepo. Any url suitable for hg pull can be specified.
bookmarks
    A list of the bookmarks to monitor.

branches
    A list of the branches to monitor; defaults to ['default'].

branch
    The desired branch to pull. Exists for backwards compatibility with old configurations.

workdir
    The directory where the poller should keep its local repository. It is mandatory for now, although later releases may provide a meaningful default. It also serves to identify the poller in the buildmaster's internal database. Changing it may result in re-processing all changes so far. Several HgPoller instances may share the same workdir to pool the common history between two different branches, thus easing the load on local and remote system resources and bandwidth. If relative, the workdir will be interpreted from the master directory.

pollInterval
    Interval in seconds between polls; default is 10 minutes.

pollRandomDelayMin
    Minimum delay in seconds to wait before each poll; defaults to 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Setting it equal to the maximum delay will effectively delay all polls by a fixed amount of time. Must be less than or equal to the maximum delay.

pollRandomDelayMax
    Maximum delay in seconds to wait before each poll; defaults to 0. This is useful if you have a lot of pollers and want to spread the polling load over a period of time. Must be less than the poll interval.

pollAtLaunch
    Determines when the first poll occurs. True = immediately on launch; False = wait for one pollInterval (default).

hgbin
    Path to the Mercurial binary; defaults to just 'hg'.

category
    Set the category to be used for the changes produced by the HgPoller. This will then be set in any changes generated by the HgPoller, and can be used in a Change Filter for triggering particular builders.

project
    Set the name of the project to be used for the HgPoller.
This will then be set in any changes generated by the HgPoller, and can be used in a Change Filter for triggering particular builders.

usetimestamps
    Parse each revision's commit timestamp (default is True), or ignore it in favor of the current time, so that recently processed commits appear together in the waterfall page.

encoding
    The encoding used to parse the author's name and commit message. The default encoding is 'utf-8'.

revlink
    A function that maps a branch and revision to a valid url (e.g. hgweb), stored along with the change. This must be a callable which takes two arguments, the branch and the revision. Defaults to lambda branch, revision: u''.

A configuration for the Mercurial poller might look like this:

from buildbot.plugins import changes

c['change_source'] = changes.HgPoller(repourl='http://hg.example.org/projects/myrepo',
                                      branch='great_new_feature',
                                      workdir='hg
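The revlink argument expects a callable taking the branch and the revision. A minimal sketch, assuming a hypothetical hgweb instance; the host, path, and function name are illustrative, not from the Buildbot docs:

```python
# Hypothetical revlink helper for an hgweb changeset viewer.
# The host and repository path below are placeholders.
def hgweb_revlink(branch, revision):
    # hgweb typically exposes changesets under /rev/<revision>;
    # the branch is not needed for this URL scheme.
    return 'http://hg.example.org/projects/myrepo/rev/{}'.format(revision)

# It would be passed to the poller as:
#   changes.HgPoller(..., revlink=hgweb_revlink)
```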
http://docs.buildbot.net/current/developer/index.html#buildbot-development | 3. Buildbot Development — Buildbot 4.3.0 documentation Buildbot 1. Buildbot Tutorial 2. Buildbot Manual 3. Buildbot Development 3.1. Development Quick-start 3.2. Submitting Pull Requests 3.3. General Documents 3.4. REST API 3.5. REST API Specification 3.6. Data API 3.7. Database 3.8. Database connectors API 3.9. Messaging and Queues 3.10. Classes 4. Release Notes 5. Older Release Notes 6. API Indices Buildbot 3. Buildbot Development View page source 3. Buildbot Development This chapter is the official repository for the collected wisdom of the Buildbot hackers. It is intended both for developers writing patches that will be included in Buildbot itself and for advanced users who wish to customize Buildbot. Note Public API Any API that is not documented in the official Buildbot documentation is considered internal and subject to change. If you would like it to be officially exposed, open a bug report on the Buildbot Github project . 3.1. Development Quick-start 3.1.1. Create a Buildbot Python Environment 3.1.2. Create a JavaScript Frontend Environment 3.2. Submitting Pull Requests 3.2.1. Guidelines 3.2.2. How to create a pull request 3.2.3. Local testing cheat sheet 3.3. General Documents 3.3.1. Master Organization 3.3.2. Buildbot Coding Style 3.3.3. Buildbot’s Test Suite 3.3.4. Configuration 3.3.5. Configuration in AngularJS 3.3.6. Writing Schedulers 3.3.7. Utilities 3.3.8. Build Result Codes 3.3.9. WWW Server 3.3.10. Javascript Data Module 3.3.11. Base web application 3.3.12. Authentication 3.3.13. Authorization 3.3.14. Master-Worker API 3.3.15. Master-Worker connection with MessagePack over WebSocket protocol 3.3.16. Claiming Build Requests 3.3.17. String Encodings 3.3.18. Metrics 3.3.19. Secrets 3.3.20. Secrets manager 3.3.21. Secrets providers 3.3.22. Statistics Service 3.3.23. How to package Buildbot plugins 3.4. REST API 3.4.1. Versions 3.4.2. Getting 3.4.3. Collections 3.4.4. 
Controlling 3.4.5. Authentication 3.5. REST API Specification 3.5.1. builder 3.5.2. buildrequest 3.5.3. build 3.5.4. buildset 3.5.5. build_data 3.5.6. change 3.5.7. changesource 3.5.8. codebase 3.5.9. codebase_branch 3.5.10. codebase_commit 3.5.11. forcescheduler 3.5.12. identifier 3.5.13. logchunk 3.5.14. log 3.5.15. master 3.5.16. patch 3.5.17. project 3.5.18. rootlink 3.5.19. scheduler 3.5.20. sourcedproperties 3.5.21. sourcestamp 3.5.22. spec 3.5.23. step 3.5.24. worker 3.5.25. test_result 3.5.26. test_result_set 3.5.27. Raw endpoints 3.6. Data API 3.6.1. Sections 3.6.2. Concrete Interfaces 3.6.3. Extending the Data API 3.6.4. Data Model 3.7. Database 3.7.1. Database Overview 3.7.2. Schema 3.7.3. Identifier 3.7.4. Writing Database Connector Methods 3.7.5. Modifying the Database Schema 3.7.6. Foreign key checking 3.7.7. Database Compatibility Notes 3.7.8. Testing migrations with real databases 3.8. Database connectors API 3.8.1. Buildsets connector 3.8.2. Buildrequests connector 3.8.3. Builders connector 3.8.4. Builds connector 3.8.5. Build data connector 3.8.6. Steps connector 3.8.7. Logs connector 3.8.8. Changes connector 3.8.9. Change sources connector 3.8.10. Schedulers connector 3.8.11. Source stamps connector 3.8.12. State connector 3.8.13. Users connector 3.8.14. Masters connector 3.8.15. Workers connector 3.9. Messaging and Queues 3.9.1. Overview 3.9.2. Connector API 3.9.3. Queue Schema 3.9.4. Message Schema 3.10. Classes 3.10.1. Builds 3.10.2. Workers 3.10.3. BuildFactory 3.10.4. Change Sources 3.10.5. RemoteCommands 3.10.6. BuildSteps 3.10.7. BaseScheduler 3.10.8. ForceScheduler 3.10.9. IRenderable 3.10.10. IProperties 3.10.11. IConfigurator 3.10.12. ResultSpecs 3.10.13. Protocols 3.10.14. WorkerManager 3.10.15. Logs 3.10.16. LogObservers 3.10.17. Authentication 3.10.18. Avatars 3.10.19. Web Server Classes Previous Next © Copyright Buildbot Team Members. Built with Sphinx using a theme provided by Read the Docs . | 2026-01-13T09:30:34 |
https://llvmweekly.org/issue/553 | LLVM Weekly - #553, August 5th 2024 LLVM Weekly - #553, August 5th 2024 Welcome to the five hundred and fifty-third issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury . Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org , @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org . News and articles from around the web and events Jeremy Kun blogged about defining MLIR patterns with PDLL . The next Portland LLVM social will be taking place on August 15th . According to the LLVM calendar in the coming week there will be the following: Office hours with the following hosts: Anastasia Stulova, Anton Korobeynikov, Quentin Colombet, Johannes Doerfert. Online sync-ups on the following topics: Flang, MLIR C/C++ frontend, pointer authentication, SPIR-V, MemorySSA, libc++, new contributors, LLVM/Offload, classic flang, Clang C/C++ language working group, loop optimisation, OpenMP for flang, MLIR. For more details see the LLVM calendar , getting involved documentation on online sync ups and office hours . On the forums LLVM 19.1.0-rc2 was tagged . As Tobias Hieta noted “As we pass the second RC we should make sure we are more conservative with the fixes we pick. Make sure you don’t submit backports for things except regressions and important bug fixes. New features will have to wait until LLVM 20 at this point.” “ArcaneNibble” shared an RFC on multilib selection for RISC-V bare metal , discussing how to move towards adopting the YAML configuration file. Ivan R. Ivanov introduced the input-gen tool for automatically generating runnable inputs for IR. 
Stella Laurenzo proposes drastically reducing the documented scope of the torch-mlir project, including making it clear that it's not positioned as an end-user facing project.

Lang Hames started a discussion about removing MCJIT and RuntimeDyld from LLVM after a deprecation period, and started a separate thread to discuss adding deprecation warnings to those libraries.

Donát Nagy shared thoughts on improving loop modeling in the Clang Static Analyzer.

Jonas Paulsson kicked off an RFC thread on introducing a NoExt attribute so ABI lowering can always require an explicit extension type attribute.

LLVM commits

Following an RFC discussion, per-function numbers were added to basic blocks. 9f75270.

MVT::SimpleValueType was extended from uint8_t to uint16_t, as 208 of 256 MVTs were already used, though a followup patch limited the maximum MVT to 511 for now. a4c6ebe, e2c74aa.

SandboxIR implementation continues with e.g. GetElementPtr, CallBrInst. f9765a2, cfb92be.

An LLVMCantFail function was added to LLVM's C API. 45ef0d4.

LowerConstantIntrinsics was merged into PreIselIntrinsicLowering. b5fc083.

The LangRef documentation on some fast-math flags was updated. 858bea8.

Support for the f8E3M4 IEEE 754 float type was added to APFloat. abc2fe3.

Clang commits

The AVX10.2 X86 ISA option is now supported by clang, along with the relevant builtins. 10bad2c, 0dba538, 3d5cc7e.

Clarifications were made about the -Ofast deprecation. 48d4d4b.

Other project commits

An AArch64 implementation of setjmp/longjmp was added to LLVM's libc. 2a6268d.

Tutorial documentation was committed for the mlir-opt tool. 7f19686.

ASan's one definition rule violation checking was sped up substantially. c584c42.

libcxx's std::unique_lock is now available under _LIBCPP_HAS_NO_THREADS. e9d5842.

libunwind can now build for RISC-V RV32E. b33a675.

Offload can now track allocations and deallocations in order to report issues such as double frees. c95abe9.
Subscribe at LLVMWeekly.org . | 2026-01-13T09:30:34 |
http://docs.buildbot.net/current/manual/installation/misc.html#f1 | 2.2.6. Next Steps — Buildbot 4.3.0 documentation Buildbot 1. Buildbot Tutorial 2. Buildbot Manual 2.1. Introduction 2.2. Installation 2.2.1. Buildbot Components 2.2.2. Requirements 2.2.3. Installing the code 2.2.4. Buildmaster Setup 2.2.5. Worker Setup 2.2.6. Next Steps 2.2.6.1. Launching the daemons 2.2.6.2. Launching worker as Windows service 2.2.6.3. Logfiles 2.2.6.4. Shutdown 2.3. Concepts 2.4. Secret Management 2.5. Configuration 2.6. Customization 2.7. Command-line Tool 2.8. Resources 2.9. Optimization 2.10. Plugin Infrastructure in Buildbot 2.11. Deployment 2.12. Upgrading 3. Buildbot Development 4. Release Notes 5. Older Release Notes 6. API Indices Buildbot 2. Buildbot Manual 2.2. Installation 2.2.6. Next Steps View page source 2.2.6. Next Steps 2.2.6.1. Launching the daemons Both the buildmaster and the worker run as daemon programs. To launch them, pass the working directory to the buildbot and buildbot-worker commands, as appropriate: # start a master buildbot start [ BASEDIR ] # start a worker buildbot-worker start [ WORKER_BASEDIR ] The BASEDIR is optional and can be omitted if the current directory contains the buildbot configuration (the buildbot.tac file). buildbot start This command will start the daemon and then return, so normally it will not produce any output. To verify that the programs are indeed running, look for a pair of files named twistd.log and twistd.pid that should be created in the working directory. twistd.pid contains the process ID of the newly-spawned daemon. When the worker connects to the buildmaster, new directories will start appearing in its base directory. The buildmaster tells the worker to create a directory for each Builder which will be using that worker. All build operations are performed within these directories: CVS checkouts, compiles, and tests. 
Once you get everything running, you will want to arrange for the buildbot daemons to be started at boot time. One way is to use cron, by putting them in a @reboot crontab entry [1]:

@reboot buildbot start [BASEDIR]

When you run crontab to set this up, remember to do it as the buildmaster or worker account! If you add this to your crontab when running as your regular account (or worse yet, root), then the daemon will run as the wrong user, quite possibly as one with more authority than you intended to provide.

It is important to remember that the environment provided to cron jobs and init scripts can be quite different than your normal runtime. There may be fewer environment variables specified, and the PATH may be shorter than usual. It is a good idea to test out this method of launching the worker by using a cron job with a time in the near future, with the same command, and then check twistd.log to make sure the worker actually started correctly. Common problems here are for /usr/local or ~/bin to not be on your PATH, or for PYTHONPATH to not be set correctly. Sometimes HOME is messed up too.

If using systemd to launch buildbot-worker, it may be a good idea to specify a fixed PATH using the Environment directive (see the systemd unit file example).

Some distributions may include conveniences to make starting buildbot at boot time easy. For instance, with the default buildbot package in Debian-based distributions, you may only need to modify /etc/default/buildbot (see also /etc/init.d/buildbot, which reads the configuration in /etc/default/buildbot).

Buildbot also comes with its own init scripts that provide support for controlling multi-worker and multi-master setups (mostly because they are based on the init script from the Debian package). With a little modification, these scripts can be used on both Debian and RHEL-based distributions.
Thus, they may prove helpful to package maintainers who are working on buildbot (or to those who haven't yet split buildbot into master and worker packages).

# install as /etc/default/buildbot-worker
#   or /etc/sysconfig/buildbot-worker
worker/contrib/init-scripts/buildbot-worker.default

# install as /etc/default/buildmaster
#   or /etc/sysconfig/buildmaster
master/contrib/init-scripts/buildmaster.default

# install as /etc/init.d/buildbot-worker
worker/contrib/init-scripts/buildbot-worker.init.sh

# install as /etc/init.d/buildmaster
master/contrib/init-scripts/buildmaster.init.sh

# ... and tell sysvinit about them
chkconfig buildmaster reset
# ... or
update-rc.d buildmaster defaults

2.2.6.2. Launching worker as Windows service

Security consideration

Setting up the buildbot worker as a Windows service requires Windows administrator rights. It is important to distinguish the installation stage from service execution. It is strongly recommended to run the Buildbot worker with the lowest required access rights, preferably under a local, non-privileged machine account. If you decide to run the Buildbot worker under a domain account, it is recommended to create a dedicated, strongly limited user account to run the service.

Windows service setup

In this description, we assume that the buildbot worker account is the local domain account worker. In case the worker should run under a domain user account, please replace .\worker with <domain>\worker. Please replace <worker.passwd> with the given user's password, and <worker.basedir> with the full/absolute directory specification of the created worker (what is called BASEDIR in Creating a worker).
buildbot_worker_windows_service --user .\worker --password <worker.passwd> --startup auto install
powershell -command "& {&'New-Item' -path Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BuildBot\Parameters}"
powershell -command "& {&'set-ItemProperty' -path Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\BuildBot\Parameters -Name directories -Value '<worker.basedir>'}"

The first command automatically adds the user rights needed to run Buildbot as a service.

Modify environment variables

This step is optional and may depend on your needs. At a minimum, we have found it useful to give worker steps a dedicated temp folder; it makes it much easier to discover which temporary files your builds leak or mishandle.

As Administrator, run regedit. Open the key Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Buildbot. Create a new value of type REG_MULTI_SZ called Environment. Add entries like:

TMP=c:\bbw\tmp
TEMP=c:\bbw\tmp

Check if Buildbot can start correctly when configured as a Windows service

As an admin user, run the command net start buildbot. If everything goes well, you should see the following output:

The BuildBot service is starting.
The BuildBot service was started successfully.

Troubleshooting

If anything goes wrong, check:
- the Twisted log at C:\bbw\worker\twistd.log
- the Windows system event log (eventvwr.msc on the command line, Show-EventLog in PowerShell)

2.2.6.3. Logfiles

While a buildbot daemon runs, it emits text to a logfile, named twistd.log. A command like tail -f twistd.log is useful to watch the command output as it runs. The buildmaster will announce any errors with its configuration file in the logfile, so it is a good idea to look at the log at startup time to check for any problems. Most buildmaster activities will cause lines to be added to the log.

2.2.6.4.
Shutdown

To stop a buildmaster or worker manually, use:

buildbot stop [BASEDIR]
# or
buildbot-worker stop [WORKER_BASEDIR]

This simply looks for the twistd.pid file and kills whatever process is identified within.

At system shutdown, all processes are sent a SIGKILL. The buildmaster and worker will respond to this by shutting down normally.

The buildmaster will respond to a SIGHUP by re-reading its config file. Of course, this only works on Unix-like systems with signal support, and not on Windows. The following shortcut is available:

buildbot reconfig [BASEDIR]

When you update the Buildbot code to a new release, you will need to restart the buildmaster and/or worker before they can take advantage of the new code. You can do a buildbot stop BASEDIR and buildbot start BASEDIR in succession, or you can use the restart shortcut, which does both steps for you:

buildbot restart [BASEDIR]

Workers can similarly be restarted with:

buildbot-worker restart [BASEDIR]

There are certain configuration changes that are not handled cleanly by buildbot reconfig. If this occurs, buildbot restart is a more robust way to fully switch over to the new configuration. buildbot restart may also be used to start a stopped Buildbot instance. This behavior is useful when writing scripts that stop, start, and restart Buildbot.

A worker may also be gracefully shut down from the web UI. This is useful to shut down a worker without interrupting any current builds. The buildmaster will wait until the worker has finished all its current builds, and will then tell the worker to shutdown.

[1] This @reboot syntax is understood by Vixie cron, which is the flavor usually provided with Linux systems. Other unices may have a cron that doesn't understand @reboot.
https://llvmweekly.org/issue/547 | LLVM Weekly - #547, June 24th 2024

Welcome to the five hundred and forty-seventh issue of LLVM Weekly, a weekly newsletter (published every Monday) covering developments in LLVM, Clang, and related projects. LLVM Weekly is brought to you by Alex Bradbury. Subscribe to future issues at http://llvmweekly.org and pass it on to anyone else you think may be interested. Please send any tips or feedback to asb@asbradbury.org, @llvmweekly or @asbradbury on Twitter, or @llvmweekly@fosstodon.org or @asb@fosstodon.org.

If you're at the RISC-V Summit Europe this week in Munich, be sure to say hello. I gave a tutorial today on implementing support for custom RISC-V instruction set extensions in LLVM and will point to the slides when they go up on the website.

News and articles from around the web and events

Recordings of presentations from EuroLLVM 2024 have started to be posted to YouTube (so you can now see whether my attempted Carbon panel write-up was accurate).
Marek Surovič and Henrich Lauko posted their EuroLLVM 2024 trip report.
LLVM 18.1.8 was released. This is intended to be the last 18.1.x point release. The 19.x branch will be created on July 23rd.
Eduardo Blázquez published a lengthy article on writing an IR from scratch.

According to the LLVM calendar, in the coming week there will be:
Office hours with the following hosts: Anton Korobeynikov, Kristof Beyls, Johannes Doerfert, Amara Emerson.
Online sync-ups on the following topics: Flang, pointer authentication, SYCL, new contributors, LLVM/offload, classic flang, loop optimisations, OpenMP for Flang, MLIR.
For more details see the LLVM calendar and the getting involved documentation on online sync-ups and office hours.

On the forums

Arryan Shukla and Rose Zhang posted an RFC on reworking LLVM libc's headergen to not depend on TableGen.
Oliver Hunt proposed extending clang with typed allocator support, introducing a new typed_memory_operation attribute.
MLIR news #67 is now available.
Valentin Clement proposed an RFC on enhancing location information in flang.

LLVM commits

The MachineOutliner was changed to consider all leaf descendants in the suffix tree as candidates for outlining. The commit message notes this results in a 3% reduction in text segment size for Clang/LLD compared to the previous -Oz. d9a00ed.
SimplifyCFG gained support for sinking instructions with multiple uses. ede27d8.
Initial support for 3-way comparison intrinsics was added to the SelectionDAG. 995835f.
RISCVInsertVSETVLI no longer requires LiveIntervals to be present. 8756043.
llvm-cov can now generate a UI in its HTML output that allows jumping between uncovered parts of code. 06aa078.
Work on SVE in GlobalISel continued with support for translating SVE formal arguments and COPY. 0e21f12.
Processor definitions were added for the SpacemiT-X60 RISC-V design. aede380.
Trigonometric intrinsics were added and supported in the DXIL backend. 936bc9b.
Support was removed for shl constant expressions. 76d3ab2.
A mitigation was added for Arm CMSE function calls, removing the assumption that a callee's arguments are sign/zero-extended and that return values passed back to the caller are sign/zero-extended. 78ff617.
An initializes attribute was introduced. 5ece35d.

Clang commits

Clang now claims full conformance to C99; see the commit message for details on those proposals marked as partial. 918ef31.
Reference types for WebAssembly were re-enabled by default. 6e38df3.
Clang's support for restrict is now characterised as fully conforming but partially implemented (as not all optimisations are provided). 6bc71cd.

Other project commits

Compiler-rt changes to support the numerical sanitizer (nsan) were committed. cae6d45.
LLD's COFF linker gained support for ARM64EC entry thunks. fed8e38.
MLIR GPU dialect layering was documented. 560b645.
All Perl scripts in openmp were replaced with Python scripts. 88dae3d.
https://www.php.net/manual/es/function.session-name.php | PHP: session_name - Manual
session_name

(PHP 4, PHP 5, PHP 7, PHP 8)

session_name — Get and/or set the current session name

Description

session_name(?string $name = null): string|false

session_name() returns the name of the current session. If the name argument is given, session_name() will update the session name and return the old session name.

When a new session name name is supplied, session_name() modifies the HTTP cookie (and the output content when session.trans_sid is enabled). Once the HTTP cookie has been sent, calling session_name() triggers an E_WARNING.

session_name() must be called before session_start() for the session to work properly.

The session name is reset to the default value stored in session.name at startup. Thus, session_name() needs to be called for every request (and before session_start() is called).

Parameters

name

The session name is used as the cookie name and in URLs (i.e. PHPSESSID). It should contain only alphanumeric characters; it should be short and descriptive (especially for users with cookie warnings enabled). If name is specified and not null, the name of the current session is replaced with this value.

Warning: Session names cannot consist only of digits; at least one letter must be present. Otherwise, a new session id is generated every time.

Return Values

Returns the name of the current session. If the name argument is given and the function updates the session name, the previous session name is returned, or false on failure.

Changelog

Version  Description
8.0.0    name is nullable now.
7.2.0    session_name() checks the session status; previously it only checked the cookie status. Older versions therefore allowed calling session_name() after session_start(), which could crash PHP and lead to strange behavior.

Examples

Example #1 session_name() example

<?php
/* set the session name to WebsiteID */
$previous_name = session_name("WebsiteID");
echo "The previous session name was $previous_name<br />";
?>

See Also

The session.name configuration directive

User Contributed Notes

Hongliang Qiang, 21 years ago:
This may sound like a no-brainer: the session_name() function will have no effect if you set session.auto_start to "true" in php.ini. The obvious explanation is that the session has already started and thus cannot be altered by session_name() wherever it is in the script, for the same reason session_name() needs to be called before session_start(), as documented. It is really not a big deal, but I had quite a hard time figuring this out and hope it might be helpful to someone like me.

php at wiz dot cx, 17 years ago:
If you try to name a PHP session "example.com", it gets converted to "example_com" and everything breaks. Don't use a period in your session name.

relsqui at chiliahedron dot com, 16 years ago:
Remember--you MUST use session_name() first if you want to use session_set_cookie_params() to, say, change the session timeout.
Otherwise it won't work, won't give any error, and nothing in the documentation (that I've seen, anyway) will explain why. Thanks to brandan of bildungsroman.com, who left a note under session_set_cookie_params() explaining this, or I'd probably still be throwing my hands up about it.

Joseph Dalrymple, 14 years ago:
For those wondering, this function is expensive! On a script that was executing in a consistent 0.0025 seconds, just the use of session_name("foo") shot my execution time up to ~0.09s. By dropping session_name("foo"), I sped my script up by roughly 0.09 seconds.

Victor H, 10 years ago:
As Joseph Dalrymple said, adding session_name() does slow down execution a little. But what I've observed is that it decreased the fluctuation between requests. Requests on my script fluctuated between 0.045 and 0.022 seconds; with session_name("myapp") they range from 0.050 to 0.045. Not a big deal, but a point worth noting. For those with problems setting the name: when session.auto_start is set to 1, you need to set session.name in php.ini!

mmulej at gmail dot com, 4 years ago:
session_name('name') must be set before session_start() because the former changes ini settings and the latter reads them. For the same reason, session_set_cookie_params($options) must be set before session_start() as well. I find it best to do the following:

function is_session_started() {
    if (php_sapi_name() === 'cli') return false;
    if (version_compare(phpversion(), '5.4.0', '>=')) return session_status() === PHP_SESSION_ACTIVE;
    return session_id() !== '';
}

if (!is_session_started()) {
    session_name($session_name);
    session_set_cookie_params($cookie_options);
    session_start();
}

tony at marston-home dot demon dot co dot uk, 7 years ago:
The description that session_name() gets and/or sets the name of the current session is technically wrong. It does nothing but deal with the value originally supplied by the session.name value within the php.ini file. Thus:

$name = session_name();
is functionally equivalent to
$name = ini_get('session.name');

and

session_name('newname');
is functionally equivalent to
ini_set('session.name', 'newname');

This also means that:

$old_name = session_name('newname');
is functionally equivalent to
$old_name = ini_set('session.name', 'newname');

The current value of session.name is not attached to a session until session_start() is called. Once session_start() has used session.name to look up the session id in the cookie data, the name becomes irrelevant, as all further operations on the session data are keyed by the session id. Note that changing session.name while a session is currently active will not update the name in any session cookie. The new name does not take effect until the next call to session_start(), and this requires that the current session, which was created with the previous value of session.name, be closed.

tony at marston-home dot demon dot co dot uk, 7 years ago:
The description has recently been modified to contain the statement "When a new session name is supplied, session_name() modifies the HTTP cookie". This is not correct, as session_name() has never modified any cookie data. A change in session.name does not become effective until session_start() is called, and it is session_start() that creates the cookie if it does not already exist. See the following bug report for details: https://bugs.php.net/bug.php?id=76413

descartavel1+php at gmail dot com, 2 years ago:
Always try to set the prefix for your session name to either `__Host-` or `__Secure-` to benefit from browsers' improved security. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#attributes. Also, if you have session.auto_start enabled, you must set this name via session.name in your config (php.ini, .htaccess, etc.).

Copyright © 2001-2026 The PHP Documentation Group
http://docs.buildbot.net/current/manual/plugins.html#developing-plugins | 2.10. Plugin Infrastructure in Buildbot — Buildbot 4.3.0 documentation

2.10. Plugin Infrastructure in Buildbot

Added in version 0.8.11.

Plugin infrastructure in Buildbot allows easy use of components that are not part of the core. It also allows unified access to components that are included in the core. The following snippet

from buildbot.plugins import kind
... kind . ComponentClass ...

allows you to use a component of kind kind. Available kinds are:

worker — workers, described in Workers
changes — change sources, described in Change Sources and Changes
schedulers — schedulers, described in Schedulers
steps — build steps, described in Build Steps
reporters — reporters (or reporter targets), described in Reporters
util — utility classes. For example, BuilderConfig, Build Factories, ChangeFilter and Locks are accessible through util.

Web interface plugins are not used directly: as described in the web server configuration section, they are listed in the corresponding section of the web server configuration dictionary.

Note: If you are not very familiar with Python and you need to use different kinds of components, start your master.cfg file with:

from buildbot.plugins import *

As a result, all components listed above will be available for use. This is what the sample master.cfg file uses.

2.10.1.
Finding Plugins

Buildbot maintains a list of plugins at https://github.com/buildbot/buildbot/wiki/PluginList.

2.10.2. Developing Plugins

Distribute a Buildbot Plug-In contains all the information necessary to develop new plugins. Please edit https://github.com/buildbot/buildbot/wiki/PluginList to add a link to your plugin!

2.10.3. Plugins of note

Plugins were introduced in Buildbot 0.8.11, so as of this writing only components that are bundled with Buildbot are available as plugins. If you have an idea or need for extending Buildbot, head to How to package Buildbot plugins, create your own plugins, and let the world know how Buildbot can be made even more useful.
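As a rough illustration of what plugin packaging involves, a plugin is exposed to the buildbot.plugins namespaces via a setuptools entry point whose group matches the plugin kind. The sketch below assumes a hypothetical package buildbot-mystep providing a custom build step; the project name, module, and class are made up for illustration, and the authoritative details are in the packaging guide linked above.

```python
# setup.py sketch for a hypothetical Buildbot plugin package.
# The entry-point group ('buildbot.steps') declares which plugin kind
# the component belongs to; all names here are illustrative only.
from setuptools import setup

setup(
    name='buildbot-mystep',
    version='0.1.0',
    packages=['buildbot_mystep'],
    entry_points={
        'buildbot.steps': [
            'MyShellStep = buildbot_mystep:MyShellStep',
        ],
    },
)
```

Once such a package is installed, the component would be reachable in master.cfg through the corresponding kind, e.g. from buildbot.plugins import steps followed by steps.MyShellStep.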