id
int64
5
1.93M
title
stringlengths
0
128
description
stringlengths
0
25.5k
collection_id
int64
0
28.1k
published_timestamp
timestamp[s]
canonical_url
stringlengths
14
581
tag_list
stringlengths
0
120
body_markdown
stringlengths
0
716k
user_username
stringlengths
2
30
1,922,262
harlock v0.5.1 released
It is with immense pleasure that I announce that version 0.5.1 off the harlock scripting language is...
0
2024-07-13T13:24:30
https://dev.to/abathargh/harlock-v051-released-15l3
go, embedded, opensource, devops
It is with immense pleasure that I announce that version 0.5.1 off the harlock scripting language is out! Here is the detailed release log, with a list the artifacts to install the language on debian-like systems, or directly a binary for the supported architectures. [Release note + artifacts @github/Abathargh/harlock](https://github.com/Abathargh/harlock/releases/tag/v0.5.1) ## Build from source Notice that you can always compile and install harlock by executing: ```bash go install github.com/Abathargh/harlock/cmd/harlock ``` or: ```bash git clone https://github.com/Abathargh/harlock make install ``` ## Release details This v0.5.1 release is a bugfix one, that solidifies harlock usage within build pipelines. The main issues that were tackled are: - Correctly handling runtime and evaluation errors being raised at the top-level scope to trigger a non-zero error code. This led to silent errors being passed inside of pipelines using harlock. - Drop support for go 1.15+ unsupported targets. - Adding previously missing .exe suffix for windows executable names when cross-compiling for windows on non-windows - Minor fixes to .gitignore and Makefile. Last couple of releases included a new error system which was thoroughly tested and fixing it is the main reason for v0.5.1. ## Usage and new developments I've been using harlock a lot to test the avr_io nim library that I work on, alongside personal projects where I need firmware to be updated over the wire/air with huge success. A nice working project using the language can be found in the bootloader example for avr_io, where it is used to showcase the library capabilities when writing bootloaders for embedded applications. I wrote an in-depth article on how to use harlock for these kind of scenarios on antima at the [following link](https://antima.it/en/harlock-a-small-language-to-handle-hex-and-elf-files/).
abathargh
1,922,263
[𝕰x𝖕𝖊𝖉𝖎𝖆™] How can I speak to someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? [GUidE~In~✈Detailed™] [USA]
Actually we never wanted to go to the school we always wanHow do I Speak to Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? -...
0
2024-07-13T12:50:52
https://dev.to/david_bceaa53a8c4137bffaf/x-how-can-i-speak-to-someone-at-guideindetailed-usa-2gb9
Actually we never wanted to go to the school we always wanHow do I Speak to Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? - (Customer Service: A Fast and Easy Guide) Contact 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥: Your Easy Guide! To speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮, follow these simple steps: Call for Assistance: Dial 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 to reach 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 directly. Visit 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s Help Center: Go to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s Help Center using your web browser. Log in to Your Account: Sign in to your 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 account to access personalized 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 options. Find “Contact Us”: Navigate to the “Contact Us” 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 section to explore different ways to get in touch. Choose Your 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 Option: Select your issue from the list provided to receive relevant assistance, use Chat or Call: 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service 𝖍𝖊𝖑𝖕𝖑𝖎𝖓𝖊 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, utilize the live chat feature for immediate help or call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s customer service for direct 𝕊𝕦𝕡𝕡𝕠𝕣𝕥. Ensure you have your booking details handy for faster service. For international 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 numbers, visit 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s official website. This guide ensures you can easily contact 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 for any travel-related inquiries or assistance you may need. 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 is one of the world’s leading online travel agencies, offering a wide range of travel services, including 𝓯𝓵𝓲𝓰𝓱𝓽s, hotels, car rentals, and vacation packages. Despite their user-friendly platform, there may be times when you need to speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service to get assistance with your travel plans or questions about your booking. This guide provides detailed instructions on how to speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service using their phone number 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, ensuring you can get the help you need fast and easy. Reaching 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service How to Speak with Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service How do I speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service? This is a common question among travelers who need direct assistance from 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service. The fastest and easiest way to speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is to call their dedicated phone number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. This number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, is available 24/7, providing round-the-clock 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 to address any issues or concerns you might have. Step-by-Step Guide to Speaking with a Live Representative at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Dial the Number: The first step to speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is to dial their phone number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. Ensure you have access to a phone and can spend some time on the call. Navigate the Menu: After dialing the number 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮, you will be greeted by an automated menu system. Listen carefully to the menu options and select the one that best matches your query. To speak with a live representative, choose the option that directs you to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service. Provide Booking Information: Once connected to a live representative after dialing 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮, you will be asked to provide your name and booking information. This includes details such as your reservation number, travel dates, and destination. Having this information ready will make the process faster and easier. Explain Your Issue: Clearly and concisely explain the reason for your call on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝓱𝓸𝓽𝓵𝓲𝓷𝓮. Whether you need to book a new travel arrangement, modify an existing booking, or resolve an issue, providing a detailed explanation will help the representative assist you more effectively. Benefits of Speaking with Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service Fast and Easy Assistance: Speaking directly with an 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service representative on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, ensures that your queries and issues are addressed promptly. 24/7 Availability: The 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, is available 24/7, providing 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 whenever you need it. Personalized 𝕊𝕦𝕡𝕡𝕠𝕣𝕥: Live representatives at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service via call on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 can offer personalized assistance tailored to your specific travel needs and preferences. Resolving Issues with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service Common Issues and How to Resolve Them 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is equipped to handle a variety of issues that travelers may encounter. Here are some common problems and how speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service, by calling 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, can help resolve them: Booking Changes and Cancellations If you need to change or cancel your booking, calling 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 is the fastest and easiest way to get it done. The 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service representative will guide you through the process, ensuring that your changes are made accurately and efficiently on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. Whether you need to adjust your travel dates, change your destination, or cancel a booking entirely, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service team at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 is there to help. 𝖗𝖊𝖋𝖚𝖓𝖉 Requests at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 For 𝖗𝖊𝖋𝖚𝖓𝖉 requests, speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is essential to ensure your request is processed correctly, when you call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, the representative will verify your booking details and initiate the 𝖗𝖊𝖋𝖚𝖓𝖉 process. Agent at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 will also provide information on the expected timeframe for your 𝖗𝖊𝖋𝖚𝖓𝖉, by calling 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, you can ensure that your 𝖗𝖊𝖋𝖚𝖓𝖉 request is handled smoothly and efficiently. Travel Disruptions at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Travel disruptions such as 𝓯𝓵𝓲𝓰𝓱𝓽 cancellations, delays, or changes in itinerary can be stressful, by calling 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, you can quickly get assistance to find alternative arrangements. 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 representatives can help rebook your 𝓯𝓵𝓲𝓰𝓱𝓽, arrange accommodation, or provide other necessary 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 to minimize the impact of the disruption, call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. For any travel disruption, dialing 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 ensures fast and easy resolution. Special Requests at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 If you have special requests, such as dietary requirements, wheelchair assistance, or specific room preferences, speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service can ensure that these needs are communicated and accommodated by calling 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 representative will note your requests and coordinate with the relevant service providers to make your travel experience as smooth as possible, just dial 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 to handle any special requests quickly and efficiently. Booking Assistance with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service How 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service Can Help Booking a 𝓯𝓵𝓲𝓰𝓱𝓽, hotel, or vacation package can sometimes be overwhelming, especially with the multitude of options available, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is ready to assist you in finding the best deals and making your travel arrangements - Simply call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 to get started. 𝓯𝓵𝓲𝓰𝓱𝓽 Bookings When you call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 for 𝓯𝓵𝓲𝓰𝓱𝓽 bookings, the 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service representative will ask for your travel dates, destination, and preferences. They will then search for the best available 𝓯𝓵𝓲𝓰𝓱𝓽s that match your criteria and help you book your tickets. This personalized service ensures that you get the best value for your money and a 𝓯𝓵𝓲𝓰𝓱𝓽 that suits your schedule. For 𝓯𝓵𝓲𝓰𝓱𝓽 bookings, dialing 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 at𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 is the easiest way to get fast and efficient assistance. Hotel Reservations 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service can also help you find and book the perfect hotel for your trip on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 by providing details such as your destination, check-in and check-out dates, and any specific preferences (e.g., proximity to landmarks, amenities), the representative can recommend hotels that meet your needs. Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 can also assist with special requests, such as room upgrades or specific room types, to book your hotel, simply call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 and get fast and easy help. Vacation Packages at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 For those looking to book a comprehensive vacation package, speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service can simplify the process, their representative at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 can help you bundle your 𝓯𝓵𝓲𝓰𝓱𝓽, hotel, and car rental into one package, often at a discounted rate. 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service will also ensure that all components of your package are coordinated, making your travel planning fast and easy, just dial 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 to book your vacation package with ease. Language Assistance with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service Multilingual 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 understands that travelers come from diverse backgrounds and may speak different languages and to accommodate this, they offer multilingual 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 through their customer service number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. How to Request Language Assistance at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? When you call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, you can request assistance in your preferred language. The automated menu may provide language options, or you can ask the representative directly for 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 in languages such as Spanish, French, German, and more, just call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service “𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. This ensures that you can communicate your needs effectively with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 and receive the help you need. For multilingual 𝕊𝕦𝕡𝕡𝕠𝕣𝕥, just call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 and request assistance in your preferred language. Conclusion Speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is fast and easy, ensuring that you get the assistance you need for your travel plans, by calling 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, you will be connected with a knowledgeable and friendly representative who can help with booking new travel arrangements, resolving issues with existing bookings, and answering any questions you may have. Whether you need help booking a 𝓯𝓵𝓲𝓰𝓱𝓽, hotel, or vacation package, or require assistance with cancellations, changes, or special requests, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is here to help, call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 today and speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service for fast and easy assistance! How do I speak to someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? Can I talk to a live person at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? Yes, you can talk to a live person at ‘[𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞]’ - 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 by calling their customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at [𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞] - // [𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞] - . This line is available 24/7 for your convenience. Airline’s phone number to call and talk with a live representative is 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 - // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 - // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 - 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 - Select your connection language upon call and choose the IVR option that best suits your query by looking at those that are generated automatically to speak with the available agents at 's customer care department. 𝔼𝕩𝕡𝕖𝕕𝕚𝕒 𝕠𝕡𝕖𝕣𝕒𝕥𝕖𝕤 𝔽𝕝𝕚𝕘𝕙𝕥𝕤 𝕗𝕠𝕣 𝕤𝕖𝕧𝕖𝕣𝕒𝕝 𝕣𝕖𝕟𝕥𝕒𝕝𝕤, 𝕚𝕟𝕔𝕝𝕦𝕕𝕚𝕟𝕘 𝕥𝕙𝕖 𝕌𝕟𝕚𝕥𝕖𝕕 𝕊𝕥𝕒𝕥𝕖𝕤. 𝕀𝕗 𝕪𝕠𝕦 𝕒𝕣𝕖 𝕤𝕠𝕞𝕖𝕠𝕟𝕖 𝕨𝕙𝕠 𝕡𝕝𝕒𝕟𝕤 𝕥𝕠 𝕥𝕣𝕒𝕧𝕖𝕝 𝕨𝕚𝕥𝕙 𝔼𝕩𝕡𝕖𝕕𝕚𝕒 𝕗𝕣𝕠𝕞 𝕠𝕣 𝕥𝕠𝕨𝕒𝕣𝕕𝕤 𝕥𝕙𝕖 ‘𝔻𝕠𝕖𝕤, 𝕪𝕠𝕦 𝕞𝕒𝕪 𝕙𝕒𝕧𝕖 𝕢𝕦𝕖𝕤𝕥𝕚𝕠𝕟𝕤 𝕒𝕓𝕠𝕦𝕥’ℍ𝕠𝕨 𝕕𝕠 𝕀 𝕥𝕒𝕝𝕜 𝕥𝕠 𝕒 - 𝕣𝕖𝕡𝕣𝕖𝕤𝕖𝕟𝕥𝕒𝕥𝕚𝕧𝕖 𝕗𝕒𝕤𝕥? 𝕋𝕙𝕖𝕟, 𝕐𝕠𝕦 𝕔𝕒𝕟 𝕕𝕚𝕒𝕝 𝔼𝕩𝕡𝕖𝕕𝕚𝕒’ 𝕔𝕦𝕤𝕥𝕠𝕞𝕖𝕣 𝕤𝕖𝕣𝕧𝕚𝕔𝕖 𝕡𝕙𝕠𝕟𝕖 𝕟𝕦𝕞𝕓𝕖𝕣 (( 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞- // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞) 𝕋𝕙𝕚𝕤 𝕒𝕝𝕝𝕠𝕨𝕤 𝕪𝕠𝕦 𝕥𝕠 𝕔𝕠𝕟𝕥𝕒𝕔𝕥 𝕒 𝕞𝕖𝕞𝕓𝕖𝕣 𝕠𝕗 𝕥𝕙𝕖𝕚𝕣 𝕔𝕦𝕤𝕥𝕠𝕞𝕖𝕣 𝕤𝕖𝕣𝕧𝕚𝕔𝕖 𝕤𝕥𝕒𝕗𝕗. 𝔹𝕪 𝕘𝕖𝕥𝕥𝕚𝕟𝕘 𝕚𝕟 𝕥𝕠𝕦𝕔𝕙 𝕨𝕚𝕥𝕙 𝕥𝕙𝕖𝕤𝕖 𝕥𝕖𝕒𝕞 𝕞𝕖𝕞𝕓𝕖𝕣𝕤, 𝕪𝕠𝕦 𝕨𝕚𝕝𝕝 𝕓𝕖 𝕒𝕓𝕝𝕖 𝕥𝕠 𝕒𝕤𝕜 𝕢𝕦𝕖𝕤𝕥𝕚𝕠𝕟𝕤 𝕒𝕟𝕕 𝕘𝕖𝕥 𝕒𝕝𝕝 𝕥𝕙𝕖 𝕚𝕟𝕗𝕠𝕣𝕞𝕒𝕥𝕚𝕠𝕟 𝕪𝕠𝕦 𝕟𝕖𝕖𝕕 𝕥𝕠 𝕖𝕟𝕙𝕒𝕟𝕔𝕖 𝕪𝕠𝕦𝕣 𝕥𝕣𝕚𝕡. How do I talk to a and representative fast? USA Know-How do I talk to a and representative fast? Connecting with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 offers several avenues, but for optimal efficiency, dialing their 'Live person at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 ’ // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 ’ (Live Persons) stands out. Phone: Access 's customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 line at ‘𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞’-𝗘𝘅𝗽𝗲𝗱𝗶𝗮(Live Persons). Steps, you need to browse through for connecting to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Agent: Dial the 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer contact center to speak with their knowledgeable human agents at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞:𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞.“Upon connection, listen to the IVR responses and opt for one that follows your query”. "Listen further and opt for your option and soon the IVR system will transfer your call to its next available agent to hear you and provide you with a better resolution. How do I speak to a 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 representative immediately? (international) 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 offers a comprehensive customer service line to address all your travel concerns. By calling𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 you can: Make or Change Reservations: Whether you’re booking a new 𝓯𝓵𝓲𝓰𝓱𝓽 or modifying an existing reservation, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 is the number to call. To speak to a - representative immediately for international inquiries, dial 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞:𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 𝗘𝘅𝗽𝗲𝗱𝗶𝗮. This direct line will connect you with a representative who can assist you promptly with your international travel-relatives or concerns. How do I speak to someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? The most convenient and fastest way to speak to someone at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 is by calling their customer service number at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 is by calling their customer service number at (𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. Alternatively, you can start a chat conversation online for assistance. If you are unfamiliar with the process, follow these steps: For phone assistance: Dial the provided customer service number: 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. Follow the automated prompts or press the appropriate IVR key to speak with live human agents at . Once connected, explain your intention and query to the agent to initiate the conversation. How to speak directly at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈? “𝔗𝔬 𝔤𝔢𝔱 𝔞 𝔯𝔢𝔰𝔭𝔬𝔫𝔰𝔢 𝔣𝔯𝔬𝔪 𝔈𝔵𝔭𝔢𝔡𝔦𝔞 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰, 𝔠𝔞𝔩𝔩 𝔱𝔥𝔢𝔦𝔯 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯 𝔰𝔲𝔭𝔭𝔬𝔯𝔱 𝔥𝔬𝔱𝔩𝔦𝔫𝔢 𝔞𝔱 ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone:. 𝔉𝔬𝔩𝔩𝔬𝔴 𝔱𝔥𝔢 ℑ𝔙ℜ 𝔭𝔯𝔬𝔪𝔭𝔱𝔰 𝔱𝔬 𝔠𝔬𝔫𝔫𝔢𝔠𝔱 𝔭𝔯𝔬𝔪𝔭𝔱𝔩𝔶 𝔴𝔦𝔱𝔥 𝔞 𝔨𝔫𝔬𝔴𝔩𝔢𝔡𝔤𝔢𝔞𝔟𝔩𝔢 𝔞𝔤𝔢𝔫𝔱 𝔴𝔥𝔬 𝔠𝔞𝔫 𝔞𝔰𝔰𝔦𝔰𝔱 𝔴𝔦𝔱𝔥 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫𝔰, 𝔣𝔩𝔦𝔤𝔥𝔱 𝔦𝔫𝔣𝔬𝔯𝔪𝔞𝔱𝔦𝔬𝔫, 𝔭𝔬𝔩𝔦𝔠𝔦𝔢𝔰, 𝔬𝔯 𝔞𝔫𝔶 𝔬𝔱𝔥𝔢𝔯 𝔮𝔲𝔢𝔯𝔦𝔢𝔰 𝔶𝔬𝔲 𝔪𝔞𝔶 𝔥𝔞𝔳𝔢. 𝔗𝔥𝔢𝔦𝔯 𝔡𝔢𝔡𝔦𝔠𝔞𝔱𝔢𝔡 𝔱𝔢𝔞𝔪 𝔦𝔰 𝔯𝔢𝔞𝔡𝔶 𝔱𝔬 𝔭𝔯𝔬𝔳𝔦𝔡𝔢 𝔭𝔯𝔬𝔪𝔭𝔱 𝔞𝔫𝔡 𝔯𝔢𝔩𝔦𝔞𝔟𝔩𝔢 𝔞𝔰𝔰𝔦𝔰𝔱𝔞𝔫𝔠𝔢. ℭ𝔬𝔫𝔱𝔞𝔠𝔱 𝔱𝔥𝔢𝔪 𝔱𝔬𝔡𝔞𝔶 𝔱𝔬 𝔢𝔫𝔰𝔲𝔯𝔢 𝔞 𝔰𝔱𝔯𝔢𝔰𝔰-𝔣𝔯𝔢𝔢 𝔱𝔯𝔞𝔳𝔢𝔩 𝔢𝔵𝔭𝔢𝔯𝔦𝔢𝔫𝔠𝔢.” How do I speak to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service? To connect with a live agent at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈, you have several convenient options available. You can reach out via their customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone: or opt for the Quick Connect option at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone:. Additionally, you can engage in a live chat directly on their website or utilize their email 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 services. Rest assured, no matter which method you choose, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈 is committed to providing prompt assistance to effectively address your needs. How do I speak to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service? You’ve got several choices for connecting with a live representative at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈. Contact their customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone: (Live Person) for immediate assistance. Alternatively, you can engage with them using their website’s live chat or email 𝕊𝕦𝕡𝕡𝕠𝕣𝕥. How do I speak to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service? To connect with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈, explore multiple channels such as their website, mobile app, or customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone:. Booking 𝓯𝓵𝓲𝓰𝓱𝓽s, managing reservations, and accessing travel information are seamlessly available online. As a leading travel agency, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 understands the importance of customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 and offers various channels to assist. For live 𝕊𝕦𝕡𝕡𝕠𝕣𝕥, you can call their customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone:, engage in live chat on their website, or use email 𝕊𝕦𝕡𝕡𝕠𝕣𝕥. How do I speak to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service? Need live 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 from 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈? Dial ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone: (Live Person) or ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone: (Quick Connect) anytime, day or night. You can also utilize their website’s live chat feature or email 𝕊𝕦𝕡𝕡𝕠𝕣𝕥. Speaking directly with a 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈 representative is straightforward. Whether you’re sorting out booking hiccups, adjusting travel plans, or seeking answers to specific inquiries, connecting with a live person can swiftly resolve your concerns. This section provides a detailed, step-by-step guide on how to reach 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈 customer service via phone, including the best times to call to minimize your waiting time ted to enjoy.
david_bceaa53a8c4137bffaf
1,922,264
🌍✨ Step into the future and share the power of clean energy with Bitpower Loop! We are committed to innovation and sustainable
Bitpower Loop is more than just a smart energy management company, it’s your trusted energy partner....
0
2024-07-13T12:53:15
https://dev.to/lo_de_d135d79107ce4f48bb2/step-into-the-future-and-share-the-power-of-clean-energy-with-bitpower-loop-we-are-committed-to-innovation-and-sustainable-2h9a
Bitpower Loop is more than just a smart energy management company, it’s your trusted energy partner. Through advanced technology and data analysis, we customize energy optimization solutions for you to ensure that your energy use is more efficient and economical. At the same time, we are also committed to promoting the development of green energy technology and contributing our share to global environmental protection. Join Bitpower Loop and pursue a clean energy future with us! Let us join hands to create a more sustainable world. #BitpowerLoop #cleanenergy #sustainabledevelopment
lo_de_d135d79107ce4f48bb2
1,922,265
Run your program in the kernel space with eBPF
Hi there! Have you heard about eBPF? eBPF is not a new technology, but its usage has been growing in...
0
2024-07-13T12:55:21
https://diogodanielsoaresferreira.github.io/ebpf/
c, programming, linux, architecture
Hi there! Have you heard about eBPF? eBPF is not a new technology, but its usage has been growing in some areas, such as network security, network observability and performance monitoring. However, the implications of allowing users to run user code in kernel space can have a much wider impact than just in those areas. Let's find out how it works! --------------------- To understand eBPF, we must first understand what is the kernel space and the user space in an operative system. The **user space** is where most programs run. The **kernel space** is where the OS runs. The kernel space has privileged access to hardware, such as devices, file access and networks. That's why device drivers usually run on kernel space. However, if a program on the kernel breaks, it can break the whole OS, so most programs run in user space. ![The user space and the kernel space.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8a38naa92lffgzh7tj9q.png) <figcaption>The user space and the kernel space. Source: <a href="https://eunomia.dev/tutorials/0-introduce/" target="_blank">Eunomia</a></figcaption></br> When a program runs on the user space, it can interact with the kernel space through an API - the **system calls**. If a program wants to write to a file, it does not need to access the underlying memory directly; in a similar case, if a program wants to send packets over the network, it does not access the network controller directly; instead, it relies on the system calls, that expose an abstraction of the resources. While the system calls allow for the OS to expose an abstraction of the resources, they also limit what is possible to do with them. A program interaction with the hardware is limited by the operations available through the system calls. Sometimes the system calls can even be slower than accessing the hardware directly. Each system call has a performance penalty of crossing the user/kernel space, and if an application in the user space requires access to the kernel space often, its performance can be heavily penalized. For those reasons, it may make sense to run a program in the kernel space. You have mainly two options to do that: firstly, you can request to add your code to the Linux kernel. This option can take years to be accepted and released to the general public, but makes sense if your program is useful to the kernel of the OS. Secondly, you can use eBPF. ![Comic about eBPF and Linux kernel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u9pzti19ua6mmmahtz1d.jpg) <figcaption><a href="https://isovalent.com/blog/post/ebpf-documentary-creation-story/" target="_blank">eBPF Comic by Philipp Meier and Thomas Graf</a></figcaption></br> --------------------- BPF (**Berkeley Packet Filter**) is a kernel program that filters packets before they are read by the kernel. This is done with a small virtual machine that can be programmed by applications in the user space. Using BPF, you can perform packet filtering in an extremely efficient way, with the tradeoff of being limited by a small instruction set defined by BPF. eBPF (**extended BPF**) is mainly an extension of the instruction set of the original BPF, with custom data structures (maps), helper functions, and tail calls, among others. With the new extension set, eBPF is not limited to networking use cases such as packet filtering, but can allow to trace and manipulate kernel function calls, syscalls, and other system events. ![Interaction between eBPF and the Linux kernel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0660w8bmj659d61ab1y0.png) <figcaption>Depiction of the interaction between eBPF and the Linux kernel. Source: <a href="https://oswalt.dev/2021/01/introduction-to-ebpf/" target="_blank">Introduction to eBPF</a></figcaption></br> In this way, you can code your program from the user space to be run in the kernel space, at runtime, and sandboxed. As code is being run in the kernel space, additional measures must be taken to ensure that the code does not introduce problems in the kernel. eBPF performs security checks on the bytecode before loading it into memory, ensuring some problems do not happen, such as memory out-of-bounds. Many features in the virtual machine that can pose security risks or that may expand the attack surface are also disabled by default. With these innovative features, eBPF is now used widely in data centers of large companies, such as Facebook, Google or Netflix, with a focus on network monitoring, performance monitoring, or network observability. -------------------- Let's implement a simple program using eBPF. The program will monitor the deleted files in the system and print their filename. To do that, we will use the [eunomia-bpf tool](https://eunomia.dev/) to help us with the development. The eBPF program is the following: ```c #include "vmlinux.h" #include <bpf/bpf_helpers.h> #include <bpf/bpf_tracing.h> #include <bpf/bpf_core_read.h> char LICENSE[] SEC("license") = "Dual BSD/GPL"; SEC("kprobe/do_unlinkat") int BPF_KPROBE(do_unlinkat, int dfd, struct filename *name) { pid_t pid; const char *filename; pid = bpf_get_current_pid_tgid() >> 32; filename = BPF_CORE_READ(name, name); bpf_printk("KPROBE ENTRY pid = %d, filename = %s\n", pid, filename); return 0; } ``` This code is from the [eunomia tutorial](https://eunomia.dev/tutorials/2-kprobe-unlink/). As you can see, the eBPF code is very similar to C. In reality, eBPF programs are written in a restricted subset of C. eBPF allows the user to write programs (probes) to execute before or after a system call is handled. In this case, we are defining a probe (`kprobe/do_unlinkat`) that executes before the system call `do_unlinkat` is executed. This system call is called when a file is deleted. Almost all functions in the kernel can be probed, including system calls or interrupt handlers. The function takes as parameters the file descriptor and a pointer to the name of the file being removed. In the code, we retrieve the PID of the process executing the system call and the filename, and print them in the kernel log. The kernel log can be accessed in the file `/sys/kernel/debug/tracing/trace_pipe`. -------------------- Let's compile and run the program using the ecc tool. ```bash > ./ecc kprobe-link.bpf.c INFO [ecc_rs::bpf_compiler] Compiling bpf object... INFO [ecc_rs::bpf_compiler] Generating package json.. INFO [ecc_rs::bpf_compiler] Packing ebpf object and config into package.json... > sudo ./ecli run package.json INFO [faerie::elf] strtab: 0x4f9 symtab 0x538 relocs 0x580 sh_offset 0x580 INFO [bpf_loader_lib::skeleton::poller] Running ebpf program... ``` Now that the program is running, let's create and delete a file. ```bash > touch toBeDeleted > rm toBeDeleted ``` Finally, let's see what is in the output file. ```bash > sudo cat /sys/kernel/debug/tracing/trace_pipe <...>-7727 [001] ....1 867.906922: bpf_trace_printk: KPROBE ENTRY pid = 7727, filename = toBeDeleted ``` There may be many log lines, but you should find the indication of the file you just deleted. If you're curious, play other applications and see which files they remove. You may be surprised! Probes are just a single feature of many in eBPF, so if you want to explore more, follow [other tutorials from Eunomia](https://eunomia.dev/tutorials/). -------------------- As you can see, **probing a system call from the user space would not be possible without eBPF**. eBPF opens up a lot of possibilities for exploring the kernel space like it was not possible before. For programs that must interact with the low-level details of the operative system, such as file access, networking processes, hardware access, or other kernel operation, eBPF is a great tool to have under your belt. Thanks for reading!
diogodanielsoaresferreira
1,922,267
Energy Future in Control: A Comprehensive Analysis of the BitPower Loop
In today's rapidly developing technological era, the efficient utilization of energy and sustainable...
0
2024-07-13T12:56:44
https://dev.to/_b0632f7b521969d18faa4/energy-future-in-control-a-comprehensive-analysis-of-the-bitpower-loop-14kd
In today's rapidly developing technological era, the efficient utilization of energy and sustainable development have become the focus of attention for all sectors of society. As a revolutionary and innovative technology, BitPower Loop not only brings new possibilities for energy management, but also injects a strong impetus for environmental protection and economic development. In this article, we will comprehensively analyze BitPower Loop, revealing its unique advantages and wide application prospects. What is BitPower Loop? BitPower Loop is an advanced energy management system that utilizes blockchain technology and smart contracts to achieve efficient scheduling and transparent trading of energy. The system ensures the security, transparency and traceability of energy transactions through a decentralized network architecture, providing users with a trusted energy management platform. Core Benefits of BitPower Loop Efficiency: BitPower Loop utilizes blockchain technology to achieve rapid processing and real-time settlement of energy transactions, greatly improving the efficiency of energy utilization. The automatic execution of smart contracts eliminates the delays and errors of manual operations and ensures the efficient flow of energy. Security: The blockchain's encryption technology provides strong security for BitPower Loop. Every transaction is encrypted and verified, ensuring data integrity and tamperability, so users can trade energy with confidence. Transparency: BitPower Loop's decentralized network structure makes all transaction records publicly available. Users can monitor the flow of energy in real time and understand the source and destination of energy, enhancing the transparency of energy management. Sustainability: BitPower Loop promotes the development of sustainable energy by intelligently scheduling and optimizing energy distribution, maximizing the use of renewable energy and reducing dependence on traditional fossil energy. Application Prospects of BitPower Loop Smart Grid: The application of BitPower Loop in smart grid can realize accurate distribution and dynamic scheduling of energy, and improve the stability and reliability of the power grid. Users can use the platform to monitor and manage the electricity consumption of their homes or businesses in real time and optimize energy consumption. Distributed Energy System: In a distributed energy system, BitPower Loop can connect different energy producers and consumers to realize efficient trading and sharing of energy. Whether it is solar energy, electric vehicle charging stations or home energy storage systems, they can all be seamlessly connected and work together through BitPower Loop. Carbon Emissions Trading: BitPower Loop can also be used for carbon emissions trading, recording and verifying carbon emissions data through blockchain technology to ensure transparent and fair trading, and to promote the realization of global carbon emission reduction goals.@BitPower Loop
_b0632f7b521969d18faa4
1,922,268
[𝕰x𝖕𝖊𝖉𝖎𝖆™] How can I speak to someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? [GUidE~In~Detailed] [USA]
Actually we never wanted to go to the school we always wanHow do I Speak to Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? -...
0
2024-07-13T12:57:45
https://dev.to/david_bceaa53a8c4137bffaf/x-how-can-i-speak-to-someone-at-guideindetailed-usa-3h5n
Actually we never wanted to go to the school we always wanHow do I Speak to Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? - (Customer Service: A Fast and Easy Guide) Contact 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥: Your Easy Guide! To speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮, follow these simple steps: Call for Assistance: Dial 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 to reach 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 directly. Visit 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s Help Center: Go to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s Help Center using your web browser. Log in to Your Account: Sign in to your 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 account to access personalized 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 options. Find “Contact Us”: Navigate to the “Contact Us” 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 section to explore different ways to get in touch. Choose Your 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 Option: Select your issue from the list provided to receive relevant assistance, use Chat or Call: 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service 𝖍𝖊𝖑𝖕𝖑𝖎𝖓𝖊 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, utilize the live chat feature for immediate help or call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s customer service for direct 𝕊𝕦𝕡𝕡𝕠𝕣𝕥. Ensure you have your booking details handy for faster service. For international 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 numbers, visit 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s official website. This guide ensures you can easily contact 𝗘𝘅𝗽𝗲𝗱𝗶𝗮’s customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 for any travel-related inquiries or assistance you may need. 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 is one of the world’s leading online travel agencies, offering a wide range of travel services, including 𝓯𝓵𝓲𝓰𝓱𝓽s, hotels, car rentals, and vacation packages. Despite their user-friendly platform, there may be times when you need to speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service to get assistance with your travel plans or questions about your booking. This guide provides detailed instructions on how to speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service using their phone number 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, ensuring you can get the help you need fast and easy. Reaching 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service How to Speak with Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service How do I speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service? This is a common question among travelers who need direct assistance from 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service. The fastest and easiest way to speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is to call their dedicated phone number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. This number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, is available 24/7, providing round-the-clock 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 to address any issues or concerns you might have. Step-by-Step Guide to Speaking with a Live Representative at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Dial the Number: The first step to speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is to dial their phone number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. Ensure you have access to a phone and can spend some time on the call. Navigate the Menu: After dialing the number 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮, you will be greeted by an automated menu system. Listen carefully to the menu options and select the one that best matches your query. To speak with a live representative, choose the option that directs you to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service. Provide Booking Information: Once connected to a live representative after dialing 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮, you will be asked to provide your name and booking information. This includes details such as your reservation number, travel dates, and destination. Having this information ready will make the process faster and easier. Explain Your Issue: Clearly and concisely explain the reason for your call on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝓱𝓸𝓽𝓵𝓲𝓷𝓮. Whether you need to book a new travel arrangement, modify an existing booking, or resolve an issue, providing a detailed explanation will help the representative assist you more effectively. Benefits of Speaking with Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service Fast and Easy Assistance: Speaking directly with an 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service representative on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, ensures that your queries and issues are addressed promptly. 24/7 Availability: The 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, is available 24/7, providing 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 whenever you need it. Personalized 𝕊𝕦𝕡𝕡𝕠𝕣𝕥: Live representatives at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service via call on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 can offer personalized assistance tailored to your specific travel needs and preferences. Resolving Issues with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service Common Issues and How to Resolve Them 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is equipped to handle a variety of issues that travelers may encounter. Here are some common problems and how speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service, by calling 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, can help resolve them: Booking Changes and Cancellations If you need to change or cancel your booking, calling 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 is the fastest and easiest way to get it done. The 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service representative will guide you through the process, ensuring that your changes are made accurately and efficiently on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. Whether you need to adjust your travel dates, change your destination, or cancel a booking entirely, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service team at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 is there to help. 𝖗𝖊𝖋𝖚𝖓𝖉 Requests at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 For 𝖗𝖊𝖋𝖚𝖓𝖉 requests, speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is essential to ensure your request is processed correctly, when you call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, the representative will verify your booking details and initiate the 𝖗𝖊𝖋𝖚𝖓𝖉 process. Agent at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 will also provide information on the expected timeframe for your 𝖗𝖊𝖋𝖚𝖓𝖉, by calling 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, you can ensure that your 𝖗𝖊𝖋𝖚𝖓𝖉 request is handled smoothly and efficiently. Travel Disruptions at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Travel disruptions such as 𝓯𝓵𝓲𝓰𝓱𝓽 cancellations, delays, or changes in itinerary can be stressful, by calling 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, you can quickly get assistance to find alternative arrangements. 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 representatives can help rebook your 𝓯𝓵𝓲𝓰𝓱𝓽, arrange accommodation, or provide other necessary 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 to minimize the impact of the disruption, call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. For any travel disruption, dialing 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 ensures fast and easy resolution. Special Requests at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 If you have special requests, such as dietary requirements, wheelchair assistance, or specific room preferences, speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service can ensure that these needs are communicated and accommodated by calling 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 representative will note your requests and coordinate with the relevant service providers to make your travel experience as smooth as possible, just dial 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 to handle any special requests quickly and efficiently. Booking Assistance with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service How 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service Can Help Booking a 𝓯𝓵𝓲𝓰𝓱𝓽, hotel, or vacation package can sometimes be overwhelming, especially with the multitude of options available, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is ready to assist you in finding the best deals and making your travel arrangements - Simply call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 to get started. 𝓯𝓵𝓲𝓰𝓱𝓽 Bookings When you call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 for 𝓯𝓵𝓲𝓰𝓱𝓽 bookings, the 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service representative will ask for your travel dates, destination, and preferences. They will then search for the best available 𝓯𝓵𝓲𝓰𝓱𝓽s that match your criteria and help you book your tickets. This personalized service ensures that you get the best value for your money and a 𝓯𝓵𝓲𝓰𝓱𝓽 that suits your schedule. For 𝓯𝓵𝓲𝓰𝓱𝓽 bookings, dialing 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 at𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 is the easiest way to get fast and efficient assistance. Hotel Reservations 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service can also help you find and book the perfect hotel for your trip on 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 by providing details such as your destination, check-in and check-out dates, and any specific preferences (e.g., proximity to landmarks, amenities), the representative can recommend hotels that meet your needs. Someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 can also assist with special requests, such as room upgrades or specific room types, to book your hotel, simply call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 and get fast and easy help. Vacation Packages at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 For those looking to book a comprehensive vacation package, speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service can simplify the process, their representative at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 can help you bundle your 𝓯𝓵𝓲𝓰𝓱𝓽, hotel, and car rental into one package, often at a discounted rate. 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service will also ensure that all components of your package are coordinated, making your travel planning fast and easy, just dial 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 to book your vacation package with ease. Language Assistance with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service Multilingual 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 understands that travelers come from diverse backgrounds and may speak different languages and to accommodate this, they offer multilingual 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 through their customer service number, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. How to Request Language Assistance at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? When you call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, you can request assistance in your preferred language. The automated menu may provide language options, or you can ask the representative directly for 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 in languages such as Spanish, French, German, and more, just call 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service “𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. This ensures that you can communicate your needs effectively with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 and receive the help you need. For multilingual 𝕊𝕦𝕡𝕡𝕠𝕣𝕥, just call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 and request assistance in your preferred language. Conclusion Speaking with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is fast and easy, ensuring that you get the assistance you need for your travel plans, by calling 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞, you will be connected with a knowledgeable and friendly representative who can help with booking new travel arrangements, resolving issues with existing bookings, and answering any questions you may have. Whether you need help booking a 𝓯𝓵𝓲𝓰𝓱𝓽, hotel, or vacation package, or require assistance with cancellations, changes, or special requests, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service is here to help, call 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 today and speak with someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Customer Service for fast and easy assistance! How do I speak to someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? Can I talk to a live person at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? Yes, you can talk to a live person at ‘[𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞]’ - 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 by calling their customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at [𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞] - // [𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞] - . This line is available 24/7 for your convenience. Airline’s phone number to call and talk with a live representative is 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 - // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 - // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 - 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 - Select your connection language upon call and choose the IVR option that best suits your query by looking at those that are generated automatically to speak with the available agents at 's customer care department. 𝔼𝕩𝕡𝕖𝕕𝕚𝕒 𝕠𝕡𝕖𝕣𝕒𝕥𝕖𝕤 𝔽𝕝𝕚𝕘𝕙𝕥𝕤 𝕗𝕠𝕣 𝕤𝕖𝕧𝕖𝕣𝕒𝕝 𝕣𝕖𝕟𝕥𝕒𝕝𝕤, 𝕚𝕟𝕔𝕝𝕦𝕕𝕚𝕟𝕘 𝕥𝕙𝕖 𝕌𝕟𝕚𝕥𝕖𝕕 𝕊𝕥𝕒𝕥𝕖𝕤. 𝕀𝕗 𝕪𝕠𝕦 𝕒𝕣𝕖 𝕤𝕠𝕞𝕖𝕠𝕟𝕖 𝕨𝕙𝕠 𝕡𝕝𝕒𝕟𝕤 𝕥𝕠 𝕥𝕣𝕒𝕧𝕖𝕝 𝕨𝕚𝕥𝕙 𝔼𝕩𝕡𝕖𝕕𝕚𝕒 𝕗𝕣𝕠𝕞 𝕠𝕣 𝕥𝕠𝕨𝕒𝕣𝕕𝕤 𝕥𝕙𝕖 ‘𝔻𝕠𝕖𝕤, 𝕪𝕠𝕦 𝕞𝕒𝕪 𝕙𝕒𝕧𝕖 𝕢𝕦𝕖𝕤𝕥𝕚𝕠𝕟𝕤 𝕒𝕓𝕠𝕦𝕥’ℍ𝕠𝕨 𝕕𝕠 𝕀 𝕥𝕒𝕝𝕜 𝕥𝕠 𝕒 - 𝕣𝕖𝕡𝕣𝕖𝕤𝕖𝕟𝕥𝕒𝕥𝕚𝕧𝕖 𝕗𝕒𝕤𝕥? 𝕋𝕙𝕖𝕟, 𝕐𝕠𝕦 𝕔𝕒𝕟 𝕕𝕚𝕒𝕝 𝔼𝕩𝕡𝕖𝕕𝕚𝕒’ 𝕔𝕦𝕤𝕥𝕠𝕞𝕖𝕣 𝕤𝕖𝕣𝕧𝕚𝕔𝕖 𝕡𝕙𝕠𝕟𝕖 𝕟𝕦𝕞𝕓𝕖𝕣 (( 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞- // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞) 𝕋𝕙𝕚𝕤 𝕒𝕝𝕝𝕠𝕨𝕤 𝕪𝕠𝕦 𝕥𝕠 𝕔𝕠𝕟𝕥𝕒𝕔𝕥 𝕒 𝕞𝕖𝕞𝕓𝕖𝕣 𝕠𝕗 𝕥𝕙𝕖𝕚𝕣 𝕔𝕦𝕤𝕥𝕠𝕞𝕖𝕣 𝕤𝕖𝕣𝕧𝕚𝕔𝕖 𝕤𝕥𝕒𝕗𝕗. 𝔹𝕪 𝕘𝕖𝕥𝕥𝕚𝕟𝕘 𝕚𝕟 𝕥𝕠𝕦𝕔𝕙 𝕨𝕚𝕥𝕙 𝕥𝕙𝕖𝕤𝕖 𝕥𝕖𝕒𝕞 𝕞𝕖𝕞𝕓𝕖𝕣𝕤, 𝕪𝕠𝕦 𝕨𝕚𝕝𝕝 𝕓𝕖 𝕒𝕓𝕝𝕖 𝕥𝕠 𝕒𝕤𝕜 𝕢𝕦𝕖𝕤𝕥𝕚𝕠𝕟𝕤 𝕒𝕟𝕕 𝕘𝕖𝕥 𝕒𝕝𝕝 𝕥𝕙𝕖 𝕚𝕟𝕗𝕠𝕣𝕞𝕒𝕥𝕚𝕠𝕟 𝕪𝕠𝕦 𝕟𝕖𝕖𝕕 𝕥𝕠 𝕖𝕟𝕙𝕒𝕟𝕔𝕖 𝕪𝕠𝕦𝕣 𝕥𝕣𝕚𝕡. How do I talk to a and representative fast? USA Know-How do I talk to a and representative fast? Connecting with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 offers several avenues, but for optimal efficiency, dialing their 'Live person at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 ’ // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 ’ (Live Persons) stands out. Phone: Access 's customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 line at ‘𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞’-𝗘𝘅𝗽𝗲𝗱𝗶𝗮(Live Persons). Steps, you need to browse through for connecting to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 Agent: Dial the 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer contact center to speak with their knowledgeable human agents at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞:𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞.“Upon connection, listen to the IVR responses and opt for one that follows your query”. "Listen further and opt for your option and soon the IVR system will transfer your call to its next available agent to hear you and provide you with a better resolution. How do I speak to a 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 representative immediately? (international) 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 offers a comprehensive customer service line to address all your travel concerns. By calling𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 you can: Make or Change Reservations: Whether you’re booking a new 𝓯𝓵𝓲𝓰𝓱𝓽 or modifying an existing reservation, 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 is the number to call. To speak to a - representative immediately for international inquiries, dial 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞:𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 𝗘𝘅𝗽𝗲𝗱𝗶𝗮. This direct line will connect you with a representative who can assist you promptly with your international travel-relatives or concerns. How do I speak to someone at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮? The most convenient and fastest way to speak to someone at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 is by calling their customer service number at 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 is by calling their customer service number at (𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. Alternatively, you can start a chat conversation online for assistance. If you are unfamiliar with the process, follow these steps: For phone assistance: Dial the provided customer service number: 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞 // 𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞. Follow the automated prompts or press the appropriate IVR key to speak with live human agents at . Once connected, explain your intention and query to the agent to initiate the conversation. How to speak directly at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈? “𝔗𝔬 𝔤𝔢𝔱 𝔞 𝔯𝔢𝔰𝔭𝔬𝔫𝔰𝔢 𝔣𝔯𝔬𝔪 𝔈𝔵𝔭𝔢𝔡𝔦𝔞 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰, 𝔠𝔞𝔩𝔩 𝔱𝔥𝔢𝔦𝔯 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯 𝔰𝔲𝔭𝔭𝔬𝔯𝔱 𝔥𝔬𝔱𝔩𝔦𝔫𝔢 𝔞𝔱 ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone:. 𝔉𝔬𝔩𝔩𝔬𝔴 𝔱𝔥𝔢 ℑ𝔙ℜ 𝔭𝔯𝔬𝔪𝔭𝔱𝔰 𝔱𝔬 𝔠𝔬𝔫𝔫𝔢𝔠𝔱 𝔭𝔯𝔬𝔪𝔭𝔱𝔩𝔶 𝔴𝔦𝔱𝔥 𝔞 𝔨𝔫𝔬𝔴𝔩𝔢𝔡𝔤𝔢𝔞𝔟𝔩𝔢 𝔞𝔤𝔢𝔫𝔱 𝔴𝔥𝔬 𝔠𝔞𝔫 𝔞𝔰𝔰𝔦𝔰𝔱 𝔴𝔦𝔱𝔥 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫𝔰, 𝔣𝔩𝔦𝔤𝔥𝔱 𝔦𝔫𝔣𝔬𝔯𝔪𝔞𝔱𝔦𝔬𝔫, 𝔭𝔬𝔩𝔦𝔠𝔦𝔢𝔰, 𝔬𝔯 𝔞𝔫𝔶 𝔬𝔱𝔥𝔢𝔯 𝔮𝔲𝔢𝔯𝔦𝔢𝔰 𝔶𝔬𝔲 𝔪𝔞𝔶 𝔥𝔞𝔳𝔢. 𝔗𝔥𝔢𝔦𝔯 𝔡𝔢𝔡𝔦𝔠𝔞𝔱𝔢𝔡 𝔱𝔢𝔞𝔪 𝔦𝔰 𝔯𝔢𝔞𝔡𝔶 𝔱𝔬 𝔭𝔯𝔬𝔳𝔦𝔡𝔢 𝔭𝔯𝔬𝔪𝔭𝔱 𝔞𝔫𝔡 𝔯𝔢𝔩𝔦𝔞𝔟𝔩𝔢 𝔞𝔰𝔰𝔦𝔰𝔱𝔞𝔫𝔠𝔢. ℭ𝔬𝔫𝔱𝔞𝔠𝔱 𝔱𝔥𝔢𝔪 𝔱𝔬𝔡𝔞𝔶 𝔱𝔬 𝔢𝔫𝔰𝔲𝔯𝔢 𝔞 𝔰𝔱𝔯𝔢𝔰𝔰-𝔣𝔯𝔢𝔢 𝔱𝔯𝔞𝔳𝔢𝔩 𝔢𝔵𝔭𝔢𝔯𝔦𝔢𝔫𝔠𝔢.” How do I speak to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service? To connect with a live agent at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈, you have several convenient options available. You can reach out via their customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone: or opt for the Quick Connect option at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone:. Additionally, you can engage in a live chat directly on their website or utilize their email 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 services. Rest assured, no matter which method you choose, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈 is committed to providing prompt assistance to effectively address your needs. How do I speak to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service? You’ve got several choices for connecting with a live representative at 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈. Contact their customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone: (Live Person) for immediate assistance. Alternatively, you can engage with them using their website’s live chat or email 𝕊𝕦𝕡𝕡𝕠𝕣𝕥. How do I speak to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service? To connect with 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈, explore multiple channels such as their website, mobile app, or customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone:. Booking 𝓯𝓵𝓲𝓰𝓱𝓽s, managing reservations, and accessing travel information are seamlessly available online. As a leading travel agency, 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 understands the importance of customer 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 and offers various channels to assist. For live 𝕊𝕦𝕡𝕡𝕠𝕣𝕥, you can call their customer service 𝓱𝓸𝓽𝓵𝓲𝓷𝓮 at ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone:, engage in live chat on their website, or use email 𝕊𝕦𝕡𝕡𝕠𝕣𝕥. How do I speak to 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 customer service? Need live 𝕊𝕦𝕡𝕡𝕠𝕣𝕥 from 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈? Dial ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone: (Live Person) or ||𝟏-𝟴𝟞𝟘=𝟡𝟜𝟞=𝟴𝟯𝟙𝟞||||:iphone: (Quick Connect) anytime, day or night. You can also utilize their website’s live chat feature or email 𝕊𝕦𝕡𝕡𝕠𝕣𝕥. Speaking directly with a 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈 representative is straightforward. Whether you’re sorting out booking hiccups, adjusting travel plans, or seeking answers to specific inquiries, connecting with a live person can swiftly resolve your concerns. This section provides a detailed, step-by-step guide on how to reach 𝗘𝘅𝗽𝗲𝗱𝗶𝗮 𝒜𝒾𝓇𝓁𝒾𝓃𝑒𝓈 customer service via phone, including the best times to call to minimize your waiting time ted to enjoy.
david_bceaa53a8c4137bffaf
1,922,269
React Tailwind Portfolio Template
I built a React Tailwind Portfolio template for all the developers looking to build portfolios. You...
0
2024-07-13T12:59:35
https://dev.to/devgancode/react-tailwind-portfolio-template-5cdk
webdev, javascript, beginners, programming
I built a React Tailwind Portfolio template for all the developers looking to build portfolios. You don't need to make everything from scratch you can use this template [GitHub Repo](https://github.com/ganeshpatil386386/React-Portfolio-Template) which includes the below features. - Complete Responsiveness - Dark Mode - Home Project Blogs YouTube Sections - Deployed on Vercel Future Enhancement📍 - Implement Hashnode & YouTube API - Search component for Blogs section - Nice Ui/UX design 📌Here's a complete guide on YouTube to start with the template! Support to my Channel! {% youtube https://youtu.be/H9wRx9Ld6Nk?si=UKjNh6ZP-_0kXfqA %} Recent Articles! 1. [How to Deploy React App on Firebase](https://dev.to/devgancode/deploy-react-app-on-firebase-netlify-vercel-2abf) 2. [OpenSource Next Tailwind Blog Application Deploy on Vercel](https://dev.to/devgancode/next-tailwind-blog-app-4ogb) 3. [What is Community Management](https://dev.to/devgancode/decoding-community-management-5hl8)
devgancode
1,922,273
How to unblock CapCut with VPN easily in 2024
This Blog was Originally Posted to EonVPN Blog Can’t access CapCut? It is quite understood by CapCut...
0
2024-07-13T13:04:08
https://eonvpn.com/blog/vpn-for-capcut/
freevpn, vpnforwindows, unblockwebsites, secureonlineactivity
**This Blog was Originally Posted to [EonVPN Blog](https://eonvpn.com/blog/vpn-for-capcut/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution)** Can’t access CapCut? It is quite understood by CapCut users worldwide that it is not available in all regions due to unreasonable regulations, company plans, licensing complications, technology, and political aspects. Fret not, unblocking CapCut with a VPN is unbelievably easy. Select the right VPN service, sign up, download and install the VPN, configure it, and then select a VPN server through which CapCut is available. Have an active VPN connection when you are using CapCut, and if the program does not start or functions abnormally, then fix it. This serves the purpose of providing a continuous flow when using CapCut, irrespective of the place one is in. **Why is CapCut not available everywhere?** CapCut might not be available everywhere due to several reasons: - **Regulatory Restrictions:** There are some countries that have regulatory policies that may limit apps associated with data privacy, security, and content, meaning that CapCut may not be present in some countries. - **Market Strategy:** The authors could also cover the idea that developers may initially target specific markets to provide better support and adapt to them before going international. - **Licensing Issues:** The app may include options that are controlled by local licensing and regulatory requirements and may be unavailable in some regions of the world. - **Technical and Infrastructure Limitations:** CapCut might not be compatible with all operating systems or devices, limiting its availability to certain users. - **Political Reasons:** Sometimes, it is also due to political reasons relating to the relationship between countries and their political policies concerning the internet and app downloads in their nations. **How do I download CapCut using a VPN?** It is unearthing that accessing CapCut using a VPN is relatively simple and can be used to unlock those regions where CapCut may be restricted. Here’s a simple guide to help you get started: - **Choose a Reliable VPN:** It is best to secure yourself with a reliable VPN provider such as [EonVPN](https://eonvpn.com/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), which provides safety measures and many server options. This will guarantee the freedom of CapCut as well as the protection of your operations on the social network. - **Install the VPN App:** On your computers, tablets, or any device that has an app store, just look for the VPN app to download, install, and [create an account](https://eonvpn.com/blog/create-account-on-eonvpn/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) on it. EonVPN has a simple app that supports Windows as an OP. - **Connect to a Suitable Server:** Launched the VPN software and launched CapCut and signed in to the server of any country where CapCut is available, like the USA or UK. This step is very vital as it ensures that it does not look like you are browsing from another location entirely. - **Maintain the VPN Connection:** Ensure that the VPN connection remains active all throughout when utilizing CapCut to access the associated sites. In certain circumstances, your connection to the VPN might drop. This would render you unable to access the application in question, so it is recommended to use a [kill switch VPN](https://eonvpn.com/blog/what-is-vpn-kill-switch/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). - **Download CapCut:** After connecting to a VPN, open the application in either the Google Play Store or the Apple App Store, type CapCut on the search bar, and install the application. The VPN would enable you to overcome all the restraints that regional restrictions may impose, providing access to all the functionalities of CapCut. - **Start Editing:** After downloading CapCut, you are free to use it to edit your videos, and none of it will be hindered by geography or function. Thus, through these steps, users can easily unblock CapCut with the help of a VPN so that they are able to make the most of this excellent video editing software regardless of the country they may be living in. **Final thoughts** In conclusion, when choosing a VPN for CapCut, it is crucial to consider factors such as server locations, speed, security features, and user-friendliness to ensure a seamless video editing experience. CapCut allows and ensures seamless and easy video editing features even in regions where it is not easily accessible by using EonVPN. **FAQs** **Is the CapCut application safe?** CapCut, developed by ByteDance, benefits from the reputation of a major tech company but has faced scrutiny over data privacy. Reviewing its privacy policy and requested permissions is crucial for assessing safety. Stick to official app stores for downloading and keep the app updated. Users should stay informed about any potential privacy issues. **Is using a VPN for CapCut legal?** Using a VPN for CapCut is generally legal, as VPNs are legal in most countries. However, it’s essential to check the laws of your specific country, as some regions restrict or regulate VPN usage. Additionally, using a VPN may violate CapCut’s terms of service. Always review both local laws and app policies before proceeding. **Where is CapCut available?** CapCut is available in many countries worldwide, including the United States, most of Europe, and parts of Asia. However, its availability can vary due to regional restrictions and app store policies. Some countries may have [geo-restrictions](https://eonvpn.com/blog/bypass-geo-blocking/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) on downloading or using CapCut due to regulatory or political reasons. Always check your local app store to confirm availability.
e0nvpn
1,922,275
How to center a Div in HTML and CSS?
Although it's a typical activity in web development, centering a div might be difficult for novices....
0
2024-07-13T13:09:38
https://www.nilebits.com/blog/2024/07/how-to-center-a-div-in-html-and-css/
webdev, javascript, html, css
Although it's a typical activity in web development, centering a div might be difficult for novices. It's critical to comprehend the many techniques for centering a div either horizontally, vertically, or both. This post will walk you through a number of methods to accomplish this, along with explanations and code samples. Introduction An essential component of making designs that are aesthetically pleasing and well-balanced is centering components on a web page. Being able to center a div is essential, regardless of the complexity of the user interface you're creating, even for simple webpages. This post will discuss many approaches—both conventional and cutting-edge—for centering a div within [HTML](https://www.nilebits.com/blog/2024/07/was-dom-invented-with-html/) and [CSS](https://www.linkedin.com/pulse/boost-your-css-skills-5-unknown-properties-esraa-shaheen-lzhbf/). Why Center a Div? Centering a div can enhance the layout and readability of your webpage. It helps in creating a balanced design and ensures that the content is easily accessible to users. Whether it's a text box, image, or a form, centering these elements can make your website look more professional and organized. Methods to Center a Div There are several methods to center a div in HTML and CSS. We'll cover the following techniques: Using margin: auto; Using Flexbox Using Grid Layout Using CSS Transform Using Text-Align Using Position and Negative Margin Each method has its advantages and use cases. Let's dive into each one with detailed explanations and code examples. 1. Using margin: auto; The margin: auto; method is one of the simplest ways to center a div horizontally. It works by setting the left and right margins to auto, which evenly distributes the available space on both sides of the div. Horizontal Centering ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Horizontally</title> <style> .center-horizontally { width: 50%; margin: 0 auto; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="center-horizontally"> This div is centered horizontally. </div> </body> </html> ``` In the above example, the div is centered horizontally using margin: 0 auto;. The width of the div is set to 50%, so it takes up half of the available space, with equal margins on both sides. Vertical Centering To center a div vertically using margin: auto;, you need to set the height of the parent container and the div itself. This method is not as straightforward as horizontal centering. ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Vertically</title> <style> .container { height: 100vh; display: flex; justify-content: center; align-items: center; } .center-vertically { width: 50%; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="container"> <div class="center-vertically"> This div is centered vertically. </div> </div> </body> </html> ``` In this example, we use a flex container to center the div vertically. The height: 100vh; ensures that the container takes up the full height of the viewport. The display: flex;, justify-content: center;, and align-items: center; properties align the div both horizontally and vertically within the container. 2. Using Flexbox Flexbox is a modern layout model that provides an efficient way to align and distribute space among items in a container. It simplifies the process of centering elements, both horizontally and vertically. Horizontal Centering ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Horizontally with Flexbox</title> <style> .flex-container { display: flex; justify-content: center; } .center-flex-horizontally { width: 50%; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="flex-container"> <div class="center-flex-horizontally"> This div is centered horizontally with Flexbox. </div> </div> </body> </html> ``` In this example, we use Flexbox to center the div horizontally. The display: flex; and justify-content: center; properties of the container ensure that the div is centered. Vertical Centering ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Vertically with Flexbox</title> <style> .flex-container { display: flex; justify-content: center; align-items: center; height: 100vh; } .center-flex-vertically { width: 50%; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="flex-container"> <div class="center-flex-vertically"> This div is centered vertically with Flexbox. </div> </div> </body> </html> ``` In this example, we use Flexbox to center the div vertically. The align-items: center; property of the container ensures that the div is centered vertically within the container. Centering Both Horizontally and Vertically ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div with Flexbox</title> <style> .flex-container { display: flex; justify-content: center; align-items: center; height: 100vh; } .center-flex { width: 50%; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="flex-container"> <div class="center-flex"> This div is centered both horizontally and vertically with Flexbox. </div> </div> </body> </html> ``` In this example, we use both justify-content: center; and align-items: center; to center the div horizontally and vertically within the container. 3. Using Grid Layout CSS Grid Layout is another powerful layout system that allows you to create complex layouts with ease. It provides a straightforward way to center elements. Horizontal Centering ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Horizontally with Grid</title> <style> .grid-container { display: grid; place-items: center; height: 100vh; } .center-grid-horizontally { width: 50%; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="grid-container"> <div class="center-grid-horizontally"> This div is centered horizontally with Grid. </div> </div> </body> </html> ``` In this example, we use CSS Grid Layout to center the div horizontally. The place-items: center; property centers the div both horizontally and vertically, but since we are focusing on horizontal centering, it achieves the desired result. Vertical Centering ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Vertically with Grid</title> <style> .grid-container { display: grid; place-items: center; height: 100vh; } .center-grid-vertically { width: 50%; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="grid-container"> <div class="center-grid-vertically"> This div is centered vertically with Grid. </div> </div> </body> </html> ``` In this example, we use CSS Grid Layout to center the div vertically. The place-items: center; property centers the div both horizontally and vertically. Centering Both Horizontally and Vertically ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div with Grid</title> <style> .grid-container { display: grid; place-items: center; height: 100vh; } .center-grid { width: 50%; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="grid-container"> <div class="center-grid"> This div is centered both horizontally and vertically with Grid. </div> </div> </body> </html> ``` In this example, the place-items: center; property centers the div both horizontally and vertically within the container. 4. Using CSS Transform CSS Transform allows you to manipulate elements' appearance and position. You can use the transform property to center a div. Horizontal Centering ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Horizontally with Transform</title> <style> .center-transform-horizontally { width: 50%; position: absolute; left: 50%; transform: translateX(-50%); background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="center-transform-horizontally"> This div is centered horizontally with Transform. </div> </body> </html> ``` In this example, the left: 50%; and transform: translateX(-50%); properties center the div horizontally. The position: absolute; property positions the div relative to its nearest positioned ancestor. Vertical Centering ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Vertically with Transform</title> <style> .center-transform-vertically { width: 50%; position: absolute; top: 50%; transform: translateY(-50%); background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="center-transform-vertically"> This div is centered vertically with Transform. </div> </body> </html> ``` In this example, the top: 50%; and transform: translateY(-50%); properties center the div vertically. The position: absolute; property positions the div relative to its nearest positioned ancestor. Centering Both Horizontally and Vertically ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div with Transform</title> <style> .center-transform { width: 50%; position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="center-transform"> This div is centered both horizontally and vertically with Transform. </div> </body> </html> ``` In this example, the top: 50%;, left: 50%;, and transform: translate(-50%, -50%); properties center the div both horizontally and vertically. The position: absolute; property positions the div relative to its nearest positioned ancestor. 5. Using Text-Align The text-align property is often used to center text, but it can also be used to center block elements within a container. Horizontal Centering ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div Horizontally with Text-Align</title> <style> .container { text-align: center; } .center-text-align { display: inline-block; width: 50%; background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="container"> <div class="center-text-align"> This div is centered horizontally with Text-Align. </div> </div> </body> </html> ``` In this example, the container has text-align: center;, and the div has display: inline-block;. This centers the div horizontally within the container. 6. Using Position and Negative Margin Using position and negative margins is another method to center a div both horizontally and vertically. Centering Both Horizontally and Vertically ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Center a Div with Position and Negative Margin</title> <style> .center-position { width: 50%; height: 200px; position: absolute; top: 50%; left: 50%; margin-top: -100px; /* Half of the height */ margin-left: -25%; /* Half of the width */ background-color: #f0f0f0; text-align: center; padding: 20px; border: 1px solid #ccc; } </style> </head> <body> <div class="center-position"> This div is centered both horizontally and vertically with Position and Negative Margin. </div> </body> </html> ``` In this example, the top: 50%; and left: 50%; properties position the div in the middle of the container. The margin-top: -100px; and margin-left: -25%; properties center the div by offsetting it by half of its height and width, respectively. Conclusion Centering a div in HTML and CSS can be accomplished using various methods. Each technique has its strengths and is suitable for different scenarios. Whether you choose to use margin: auto;, Flexbox, Grid Layout, CSS Transform, Text-Align, or Position and Negative Margin, understanding these methods will help you create balanced and visually appealing designs. By mastering these techniques, you can enhance the layout and readability of your web pages, making them more user-friendly and professional. Experiment with these methods to find the one that best suits your needs and the specific requirements of your projects. References [MDN Web Docs - CSS: Cascading Style Sheets](https://developer.mozilla.org/en-US/docs/Web/CSS) [CSS-Tricks - A Complete Guide to Flexbox](https://css-tricks.com/snippets/css/a-guide-to-flexbox/) [CSS-Tricks - A Complete Guide to Grid](https://css-tricks.com/snippets/css/complete-guide-grid/) [W3Schools - CSS](https://www.w3schools.com/css/) [MDN Web Docs - Using CSS Flexible Boxes](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox) [MDN Web Docs - CSS Grid Layout](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout) By following this guide, you can center a div with confidence, regardless of the complexity of your layout. Happy coding!
amr-saafan
1,922,277
Testing a React Application with Vitest and React Testing Library
This guide covers how to set up testing for a React application using Vitest, focusing on building an...
27,978
2024-07-14T00:30:00
https://dev.to/manjushsh/testing-a-react-application-with-vitest-and-react-testing-library-c40
webdev, react, testing, vite
This guide covers how to set up testing for a React application using Vitest, focusing on building an accordion component and applying Test Driven Development (TDD) principles. We will be using default counter app provided by vite. We are going to configure vitest and implement test cases. I will be using pnpm as package manager for this but you can use yarn or npm as well. Here are some test cases we are going to write for App (Counter App) component using React Testing Library and Vitest: 1. Ensure the logos are rendered correctly with the correct links. 2. Ensure the initial count is displayed correctly. 3. Ensure the count increments when the button is clicked. 4. Ensure the text content is displayed correctly. ## Initializing the React Project We can [initialize the vite project](https://vitejs.dev/guide/#scaffolding-your-first-vite-project) using the following commands and fill in project details: ```bash ## For pnpm: pnpm create vite # For NPM npm create vite@latest ## For yarn yarn create vite cd <YOUR_PROJECT_NAME> pnpm install ``` This should have your project setup. You can run project and access it locally. ## Adding dependencies for Testing: Install Vitest, jsdom and @testing-library/react as a development dependencies: ```bash pnpm install -D vitest jsdom @testing-library/react ``` ## Updating Configurations: ### Setup `setupTests` file: Create a test setup file at `setupTests.ts` in the `tests` folder of your project or in the src directory, whichever you prefer. Implement it as follows: ```typescript import { afterEach } from 'vitest' import { cleanup } from '@testing-library/react' import '@testing-library/jest-dom/vitest' afterEach(() => { cleanup(); }) ``` ### Updating `vite.config.ts`: One advantage of Vitest is its use of the same configuration as Vite. This alignment ensures that the test environment mirrors the build environment, enhancing the reliability of the tests. We will update the configuration to the following code to include js-dom, which facilitates testing: ```typescript test: { globals: true, environment: 'jsdom', setupFiles: './src/tests/setupTests.ts', } ``` So final vite config will look something like this: ```typescript /// <reference types="vitest" /> /// <reference types="vite/client" /> import { defineConfig } from 'vite' import react from '@vitejs/plugin-react-swc' // https://vitejs.dev/config/ export default defineConfig({ plugins: [react()], test: { globals: true, environment: 'jsdom', setupFiles: './src/tests/setupTests.ts', }, }) ``` ## Add the Test Script: We need to add the vitest command to the `package.json` to start the testing script. ```json "scripts": { // ...rest of scripts "test": "vitest" }, ``` once this is done, we can implement test cases for our component. ## Writing Tests for the Counter App: Create `App.test.tsx` in tests folder with following contents: ```typescript import { describe, it, expect } from 'vitest' import { render, screen, fireEvent } from '@testing-library/react' import App from '../App' describe('App component', () => { it('renders Vite and React logos with correct links', () => { render(<App />) const viteLogo = screen.getByRole('img', { name: /Vite logo/i }); const reactLogo = screen.getByAltText('React logo') expect(viteLogo).toBeInTheDocument() expect(reactLogo).toBeInTheDocument() expect(viteLogo.closest('a')).toHaveAttribute('href', 'https://vitejs.dev') expect(reactLogo.closest('a')).toHaveAttribute('href', 'https://react.dev') }) it('renders the initial count correctly', () => { render(<App />) const button = screen.getByRole('button', { name: /count is 0/i }) expect(button).toBeInTheDocument() }) it('increments count when button is clicked', () => { render(<App />) const button = screen.getByRole('button', { name: /count is 0/i }) fireEvent.click(button) expect(button).toHaveTextContent('count is 1') }) it('renders text content correctly', () => { render(<App />) const title = screen.getByText('Vite \+ React') const editText = screen.getByText((content, _) => content.includes('Edit') && content.includes('save to test HMR')) const docsText = screen.getByText('Click on the Vite and React logos to learn more') expect(title).toBeInTheDocument() expect(editText).toBeInTheDocument() expect(docsText).toBeInTheDocument() }) }) ``` ## Running Tests: You can run tests by running: ```bash pnpm run test ``` From the code example provided, we have demonstrated how Vitest integrates seamlessly with the React Testing Library. As previously mentioned, these libraries work in harmony: Vitest facilitates the creation of test suites, test cases, and test execution, while the React Testing Library enables thorough testing of React components by simulating user interactions with our application. You can access full project here: [![GitHub](https://img.shields.io/badge/github-%23121011.svg?style=for-the-badge&logo=github&logoColor=white)](https://github.com/manjushsh/vite-react-ts-vitest-template)
manjushsh
1,922,278
NEKMINNIT...
Hello!!!! to the Wide World of Dev. Looks to be a pretty cool landscape and have found some good help...
0
2024-07-13T13:20:13
https://dev.to/aitchi/nekminnit-18n1
Hello!!!! to the Wide World of Dev. Looks to be a pretty cool landscape and have found some good help here already so my outlook is a positive One for the DEV Community! Look forward to knocking heads with you all soon!!!!
aitchi
1,922,293
New lightbox package here!
Hello everyone, I’m very excited to introduce you to my open-source project: @duccanhole/lightbox ....
0
2024-07-13T13:20:29
https://dev.to/coderduck/new-lightbox-package-here-5c95
opensource, npm, vanilla, javascript
Hello everyone, I’m very excited to introduce you to my open-source project: `@duccanhole/lightbox` . This simple lightbox file viewer supports various types of files and was developed using vanilla JavaScript, with no dependencies. As this is my first time publishing an npm package as an open-source developer, your feedback will be incredibly valuable to help me improve this project. You can check out the project [here](https://github.com/duccanhole/lightbox-js). Thank you for reading, and have a great day!
coderduck
1,922,313
Go versus Rust in 2024: Measuring the Best with 15 Benchmarks for Everyday Tasks
This post idea was born in last several public and personal discussions with highlighting a lof of...
0
2024-07-13T13:24:38
https://dev.to/paulnixer/go-versus-rust-in-2024-measuring-the-best-with-15-benchmarks-for-everyday-tasks-52a6
go, rust, programming, bench
This post idea was born in last several public and personal discussions with highlighting a lof of technical, political, personal and religious aspects. Both programming languages got success in last decade, but released in different times: Go in 2009 and Rust in 2015. ![Go logo, ©Wikipedia](https://upload.wikimedia.org/wikipedia/commons/thumb/0/05/Go_Logo_Blue.svg/800px-Go_Logo_Blue.svg.png) Some people think that Go and Rust are not direct competitors, but this is not quite true: they cross very often: console tools, desktop applications, web services, and more. The only non-crossing area is embedded, but here Rust is not very strong due to static linking and strong competition from C/C++. This means that in many cases you will have to choose between Go and Rust as the main language for your next project. ![Rust logo, ©Wikipedia](https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/Rust_programming_language_black_logo.svg/800px-Rust_programming_language_black_logo.svg.png) The tests were not selected by code complexity or level of extravagance, the main pattern is popular tasks. Even in radically different projects like machine learning, networking, or audio processing, you can't escape the main building block: basic math like addition, string concatenation, sorting, hashing, parsing, and more. So let's dive deep and figure out what the code looks like and which is faster. Time is the most important here - the faster the better. Check the benchmarks and final score at [Nix Sanctuary](https://nixsanctuary.com/go-versus-rust-in-2024-measuring-the-best-with-15-benchmarks-for-everyday-tasks/). Don't forget to subscribe and never miss [new NS programming posts](https://nixsanctuary.com/tag/programming/).
paulnixer
1,922,314
dinesh pawar
arerer din esh pawar
0
2024-07-13T13:30:03
https://dev.to/dineshpawar07/dinesh-pawar-4ikm
javascript
arerer din esh pawar
dineshpawar07
1,922,315
Gitleaks: Find forgotten codes in your repositories
One of the problems you can face either when you are new to programming or when you have more...
0
2024-07-13T13:30:55
https://dev.to/tkouleris/gitleaks-find-forgotten-codes-in-your-repositories-24d3
git, github, security
One of the problems you can face either when you are new to programming or when you have more experience is to forget somewhere in your code or in a file that you upload to git, codes or other data that should not be public. Obviously, no one will inform you that somewhere in public you have exposed your email username and password or the token with which you request data from some service and they are charging you for it. Not even git will tell you, when you commit your code, that you are going to make a big mistake. Gitleaks was developed for this purpose. Gitleaks is a fast, lightweight and open source scanner for git repositories that can alert you abbout forgotten passwords or tokens. You can either run it autonomously in one of your repositories or integrate it automatically so that when you commit will inform you if it found a code leak. The tool is available for linux, mac and windows. - Official page [here](https://gitleaks.io/) - github repository [here](https://github.com/gitleaks/gitleaks) - video demonstration [here](https://www.youtube.com/watch?v=hux-W8PAxYE&ab_channel=geralexgr)
tkouleris
1,922,318
JOJOBET GÜNCEL GİRİŞ ADRESİ
JOJOBET GÜNCEL GİRİŞ Jojobet'e giriş yapabilmek için aşağıda ki turuncu yazılı cümleye...
0
2024-07-13T13:32:18
https://dev.to/jojokrali/jojobet-guncel-giris-adresi-39be
[JOJOBET GÜNCEL GİRİŞ](https://cutt.ly/eehgIe1q) Jojobet'e giriş yapabilmek için aşağıda ki turuncu yazılı cümleye tıklayınız. -> [JOJOBET GÜNCEL GİRİŞ ADRESİ](https://cutt.ly/eehgIe1q) <- -> [JOJOBET GÜNCEL GİRİŞ ADRESİ](https://cutt.ly/eehgIe1q) <- -> [JOJOBET GÜNCEL GİRİŞ ADRESİ](https://cutt.ly/eehgIe1q) <- -> [JOJOBET GÜNCEL GİRİŞ ADRESİ](https://cutt.ly/eehgIe1q) <- -> [JOJOBET GÜNCEL GİRİŞ ADRESİ](https://cutt.ly/eehgIe1q) <- Jojobet casino ve canlı bahis sitesi resmi Tumblr blog adresidir. Buradan Jojobet yeni giriş adresini öğrenebilir ve Jojobet sitesine giriş yapabilirsiniz.
jojokrali
1,922,319
Linear Regression, Regression: Supervised Machine Learning
What is Regression? Definition and Purpose Regression is a statistical method...
0
2024-07-13T13:32:28
https://dev.to/harshm03/linear-regression-regression-supervised-machine-learning-3mek
machinelearning, datascience, python, tutorial
### What is Regression? #### Definition and Purpose **Regression** is a statistical method used in machine learning and data science to understand relationships between variables. It involves modeling the relationship between a dependent variable (target) and one or more independent variables (predictors). The main purpose of regression is to predict or estimate the value of the dependent variable based on the values of the independent variables. #### Key Objectives: - **Prediction**: Forecasting future values based on historical data. - **Estimation**: Determining the strength and form of the relationship between variables. - **Understanding Relationships**: Identifying which independent variables are significant predictors of the dependent variable. ### Types of Regression **1. Linear Regression** - **Simple Linear Regression**: Models the relationship between two variables by fitting a linear equation to observed data. - **Equation**: `y = mx + b` - **Purpose**: Predicts the dependent variable `y` based on the independent variable `x`. - **Multiple Linear Regression**: Extends simple linear regression to include multiple independent variables. - **Equation**: `y = b0 + b1x1 + b2x2 + ... + bnxn` - **Purpose**: Predicts the dependent variable `y` based on several independent variables `x1`, `x2`, ..., `xn`. **2. Polynomial Regression** - **Description**: Models the relationship between the dependent and independent variables as an nth degree polynomial. - **Equation**: `y = b0 + b1x + b2x^2 + ... + bnx^n` - **Purpose**: Captures the non-linear relationship between variables. ### Ordinary Least Squares (OLS) Method OLS is a method for estimating the unknown parameters in a linear regression model. It minimizes the sum of the squared differences between observed and predicted values. **Equation**: The linear model for OLS can be represented as: `y = w0 + w1x1 + w2x2 + ... + wnxn` where: - `y` is the dependent variable - `x1, x2, ..., xn` are the independent variables - `w0, w1, w2, ..., wn` are the coefficients (parameters) to be estimated **Objective**: Minimize the cost function: `Cost(OLS) = Σ(yi - ŷi)^2` where: - `yi` is the actual value - `ŷi` is the predicted value ### Cost Function and Loss Minimization in Linear Regression #### Cost Function The **cost function** in linear regression quantifies the error between the predicted values and the actual values of the dependent variable. It measures how well the model's predictions align with the actual data. The most commonly used cost function in linear regression is the **Mean Squared Error (MSE)**, but there are other cost functions that can also be applied. **1. Mean Squared Error (MSE)**: The MSE is the average of the squared differences between the actual and predicted values. It is defined as: `MSE = (1/n) * Σ (yi - ŷi)^2` where: - `n` is the number of data points, - `yi` is the actual value, - `ŷi` is the predicted value. The MSE penalizes larger errors more significantly due to the squaring of the differences, making it sensitive to outliers. The goal of linear regression is to find the model parameters (coefficients) that minimize this cost function. **2. Root Mean Squared Error (RMSE)**: The RMSE is the square root of the MSE, providing an error metric in the same units as the dependent variable. It is defined as: `RMSE = √(MSE)` This metric is also sensitive to outliers and is commonly used for model evaluation. **3. Mean Absolute Error (MAE)**: The MAE measures the average magnitude of the errors in a set of predictions, without considering their direction (i.e., whether the predictions are above or below the actual values). It is defined as: `MAE = (1/n) * Σ |yi - ŷi|` The MAE is less sensitive to outliers compared to MSE and RMSE, making it a robust alternative for certain datasets. #### Loss Minimization (Optimization) **Loss minimization** involves finding the values of the model parameters that result in the lowest possible cost function value. This process is also known as **optimization**. The most common method for loss minimization in linear regression is the **Gradient Descent** algorithm. ##### Gradient Descent Gradient Descent is an iterative optimization algorithm used to minimize the cost function. It adjusts the model parameters in the direction of the steepest descent of the cost function. **Steps of Gradient Descent**: 1. **Initialize Parameters**: Start with initial values for the model parameters (e.g., coefficients `b0`, `b1`, ..., `bn`). 2. **Calculate Gradient**: Compute the gradient of the cost function with respect to each parameter. The gradient is the partial derivative of the cost function. 3. **Update Parameters**: Adjust the parameters in the opposite direction of the gradient. The adjustment is controlled by the **learning rate** (`α`), which determines the size of the steps taken towards the minimum. 4. **Repeat**: Iterate the process until the cost function converges to a minimum value (or a pre-defined number of iterations is reached). **Parameter Update Rule**: For each parameter `bj`: `bj = bj - α * (∂/∂bj) MSE` where: - `α` is the learning rate - `(∂/∂bj) MSE` is the partial derivative of the MSE with respect to `bj` The partial derivative of the MSE with respect to `bj` is calculated as: `(∂/∂bj) MSE = -(2/n) * Σ (yi - ŷi) * xij` where: - `xij` is the value of the `j`th independent variable for the `i`th data point ### Overfitting vs. Underfitting #### Overfitting - **Definition**: Overfitting occurs when a model learns the training data too well, capturing noise and outliers rather than the underlying pattern. As a result, the model performs exceptionally well on training data but poorly on unseen or validation data. - **Characteristics**: - High accuracy on training data. - Poor generalization to new data. - Complexity of the model is too high (e.g., too many parameters or a very flexible model). - **Causes**: - Too many features relative to the number of observations. - Excessive model complexity (e.g., high-degree polynomial regression). - Insufficient training data. - **Solutions**: - Use simpler models (reduce complexity). - Employ regularization techniques (e.g., Lasso, Ridge). - Use cross-validation to tune hyperparameters. - Increase training data if possible. #### Underfitting - **Definition**: Underfitting occurs when a model is too simple to capture the underlying trend in the data. This leads to poor performance on both training and validation datasets. - **Characteristics**: - Low accuracy on both training and validation data. - The model fails to learn the relationships in the data. - **Causes**: - Insufficient model complexity (e.g., linear model for a non-linear relationship). - Too few features used in the model. - Poor feature selection or engineering. - **Solutions**: - Increase model complexity (e.g., use a higher-degree polynomial or more sophisticated algorithms). - Add relevant features or perform feature engineering. - Remove overly simplistic assumptions in the model. ### Bias-Variance Trade-Off The **bias-variance trade-off** is a fundamental concept in machine learning that describes the trade-off between two sources of error that affect the performance of predictive models: bias and variance. #### Bias - **Definition**: Bias refers to the error due to overly simplistic assumptions in the learning algorithm. It represents the model's inability to capture the underlying patterns of the data. - **Characteristics**: - High bias can lead to **underfitting**, where the model is too simple to capture the complexity of the data. - Models with high bias tend to have consistent errors across different datasets. - **Examples**: Linear regression on non-linear data. #### Variance - **Definition**: Variance refers to the error due to excessive sensitivity to fluctuations in the training data. It captures how much the model's predictions would vary if it were trained on different datasets. - **Characteristics**: - High variance can lead to **overfitting**, where the model learns noise and outliers in the training data instead of the underlying distribution. - Models with high variance perform well on training data but poorly on unseen data. - **Examples**: High-degree polynomial regression on a small dataset. #### The Trade-Off - **Balancing Act**: The challenge in machine learning is to find a model that minimizes both bias and variance. A model with low bias and low variance is ideal but often hard to achieve. - **Effect of Complexity**: - As model complexity increases, bias decreases and variance increases. - Conversely, as model complexity decreases, bias increases and variance decreases. The goal is to achieve a balance where the total error (comprised of bias, variance, and irreducible error due to noise in the data) is minimized. This often involves techniques such as cross-validation, regularization, and careful feature selection to tune the model appropriately for the given dataset. ### Simple Linear Regression Simple linear regression is a statistical method that models the relationship between two variables by fitting a linear equation to observed data. This example uses a simulated dataset to represent the relationship between the size of a house (in square feet) and its price (in thousands of dollars), incorporating natural variations to reflect real-life scenarios. #### Python Code Example **1. Import Libraries** ```python import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score ``` This block imports the necessary libraries for data manipulation, plotting, and machine learning. **2. Generate Sample Data** ```python np.random.seed(42) # For reproducibility square_footage = np.array([1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300, 2400, 2500, 2600, 2700]) price = np.array([300, 320, 340, 360, 380, 400, 420, 440, 460, 480, 500, 520, 540]) + np.random.normal(0, 20, 13) # Adding noise ``` This block generates sample data for house sizes and prices, introducing random noise to simulate real-world pricing variations. **3. Prepare Features and Target Variables** ```python X = square_footage.reshape(-1, 1) # Square footage y = price # Price in thousands ``` This block prepares the features (square footage) and the target variable (house price). **4. Print Features and Target Variables** ```python print("Square Footage (X):", X) print("House Price (y):", y) ``` `Output:` ``` Square Footage (X): [[1500] [1600] [1700] [1800] [1900] [2000] [2100] [2200] [2300] [2400] [2500] [2600] [2700]] House Price (y): [309.93428306 317.23471398 352.95377076 390.46059713 375.31693251 395.31726086 451.58425631 455.34869458 450.61051228 490.85120087 490.73164614 510.68540493 544.83924543] ``` **5. Split the Dataset** ```python X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ``` This block splits the dataset into training and testing sets for model evaluation. **6. Create and Train the Model** ```python model = LinearRegression() model.fit(X_train, y_train) ``` This block initializes the linear regression model and trains it using the training dataset. **7. Make Predictions** ```python y_pred = model.predict(X_test) ``` This block uses the trained model to make predictions on the test set. **8. Evaluate the Model** ```python mse = mean_squared_error(y_test, y_pred) r2 = r2_score(y_test, y_pred) print(f'Mean Squared Error: {mse:.2f}') print(f'R-squared: {r2:.2f}') ``` `Output:` ``` Mean Squared Error: 57.99 R-squared: 0.99 ``` **9. Plot the Results** ```python plt.scatter(X, y, color='blue', label='Actual Prices') plt.plot(X_test, y_pred, color='red', linewidth=2, label='Fitted Line') plt.title('Simple Linear Regression: House Price Prediction') plt.xlabel('Square Footage (sq ft)') plt.ylabel('Price (in thousands)') plt.legend() plt.grid() plt.show() ``` This block creates a scatter plot of the actual prices versus the predicted prices to visualize the fit of the model. `Output:` ![Simple linear regression](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fce0zj3y92q8ejiijte.png) This structured approach provides a comprehensive understanding of how to implement and evaluate simple linear regression, using a realistic dataset that accounts for variations in housing prices based on square footage. ### Multiple Linear Regression Multiple linear regression is a statistical technique that models the relationship between a dependent variable and multiple independent variables. This example incorporates two features: the size of a house (in square feet) and the number of bathrooms. We analyze how both factors influence house prices. #### Python Code Example **1. Import Libraries** ```python import numpy as np from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score ``` This block imports the necessary libraries for data manipulation and machine learning. **2. Generate Sample Data** ```python np.random.seed(42) # For reproducibility square_footage = np.array([1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300, 2400, 2500, 2600, 2700]) num_bathrooms = np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5]) price = np.array([300, 320, 340, 360, 380, 400, 420, 440, 460, 480, 500, 520, 540]) + np.random.normal(0, 20, 13) # Adding noise ``` This block generates sample data for house sizes, number of bathrooms, and prices, introducing random noise to simulate real-world pricing variations. **3. Prepare Features and Target Variables** ```python X = np.column_stack((square_footage, num_bathrooms)) # Features: square footage and number of bathrooms y = price # Price in thousands ``` This block prepares the features (square footage and number of bathrooms) and the target variable (house price). **4. Print Features and Target Variables** ```python print("Features (X):", X) print("House Price (y):", y) ``` `Output:` ``` Features (X): [[1500 1] [1600 1] [1700 2] [1800 2] [1900 2] [2000 3] [2100 3] [2200 3] [2300 4] [2400 4] [2500 4] [2600 5] [2700 5]] House Price (y): [309.93428306 317.23471398 352.95377076 390.46059713 375.31693251 395.31726086 451.58425631 455.34869458 450.61051228 490.85120087 490.73164614 510.68540493 544.83924543] ``` **5. Split the Dataset** ```python X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ``` This block splits the dataset into training and testing sets for model evaluation. **6. Create and Train the Model** ```python model = LinearRegression() model.fit(X_train, y_train) ``` This block initializes the multiple linear regression model and trains it using the training dataset. **7. Make Predictions** ```python y_pred = model.predict(X_test) ``` This block uses the trained model to make predictions on the test set. **8. Evaluate the Model** ```python mse = mean_squared_error(y_test, y_pred) r2 = r2_score(y_test, y_pred) print(f'Mean Squared Error: {mse:.2f}') print(f'R-squared: {r2:.2f}') ``` `Output:` ``` Mean Squared Error: 64.16 R-squared: 0.99 ``` This structured approach demonstrates how to implement and evaluate multiple linear regression, using a realistic dataset that accounts for variations in housing prices based on both square footage and the number of bathrooms. ### Polynomial Regression Polynomial regression is a regression analysis technique where the relationship between the independent variable and the dependent variable is modeled as an nth degree polynomial. In this example, we will model the relationship between the size of a house (in square feet) and its price (in thousands of dollars) using a 3rd degree polynomial. #### Python Code Example **1. Import Libraries** ```python import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score ``` This block imports the necessary libraries for data manipulation, plotting, and machine learning. **2. Generate Sample Data** ```python np.random.seed(42) # For reproducibility square_footage = np.array([1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300, 2400, 2500, 2600, 2700]) price = np.array([300, 320, 340, 360, 380, 400, 420, 440, 460, 480, 500, 520, 540]) + np.random.normal(0, 20, 13) # Adding noise ``` This block generates sample data for house sizes and prices, introducing random noise to simulate real-world pricing variations. **3. Prepare Features and Target Variables** ```python X = square_footage.reshape(-1, 1) # Reshape for sklearn y = price # Price in thousands ``` This block prepares the features (square footage) and the target variable (house price). **4. Split the Dataset** ```python X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ``` This block splits the dataset into training and testing sets for model evaluation. **5. Create Polynomial Features** ```python poly = PolynomialFeatures(degree=3) X_poly_train = poly.fit_transform(X_train) X_poly_test = poly.transform(X_test) print("Polynomial Features (X_poly_train):", X_poly_train) print("Polynomial Features (X_poly_test):", X_poly_test) ``` `Output:` ``` Polynomial Features (X_poly_train): [[1.0000e+00 2.3000e+03 5.2900e+06 1.2167e+10] [1.0000e+00 2.0000e+03 4.0000e+06 8.0000e+09] [1.0000e+00 1.7000e+03 2.8900e+06 4.9130e+09] [1.0000e+00 1.6000e+03 2.5600e+06 4.0960e+09] [1.0000e+00 2.7000e+03 7.2900e+06 1.9683e+10] [1.0000e+00 1.9000e+03 3.6100e+06 6.8590e+09] [1.0000e+00 2.2000e+03 4.8400e+06 1.0648e+10] [1.0000e+00 2.5000e+03 6.2500e+06 1.5625e+10] [1.0000e+00 1.8000e+03 3.2400e+06 5.8320e+09] [1.0000e+00 2.1000e+03 4.4100e+06 9.2610e+09]] Polynomial Features (X_poly_test): [[1.0000e+00 2.6000e+03 6.7600e+06 1.7576e+10] [1.0000e+00 2.4000e+03 5.7600e+06 1.3824e+10] [1.0000e+00 1.5000e+03 2.2500e+06 3.3750e+09]] ``` **6. Create and Train the Model** ```python model = LinearRegression() model.fit(X_poly_train, y_train) ``` This block initializes the polynomial regression model and trains it using the transformed training dataset. **7. Make Predictions** ```python y_pred = model.predict(X_poly_test) ``` This block uses the trained model to make predictions on the test set. **8. Evaluate the Model** ```python mse = mean_squared_error(y_test, y_pred) r2 = r2_score(y_test, y_pred) print(f'Mean Squared Error: {mse:.2f}') print(f'R-squared: {r2:.2f}') ``` `Output:` ``` Mean Squared Error: 300.06 R-squared: 0.96 ``` **9. Plot the Results** ```python plt.scatter(X, y, color='blue', label='Actual Prices') X_grid = np.arange(min(X), max(X), 1).reshape(-1, 1) y_grid = model.predict(poly.transform(X_grid)) plt.plot(X_grid, y_grid, color='red', linewidth=2, label='Fitted Polynomial Curve') plt.title('Polynomial Regression: House Price Prediction') plt.xlabel('Square Footage (sq ft)') plt.ylabel('Price (in thousands)') plt.legend() plt.grid() plt.show() ``` This block creates a scatter plot of the actual prices versus the predicted prices and visualizes the fitted polynomial curve. `Output:` ![Polynomial regression](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7b4m97byux3exmfccw3p.png) This structured approach demonstrates how to implement and evaluate polynomial regression using a realistic dataset that captures the non-linear relationship between house size and price. By incorporating polynomial features, we enhance prediction accuracy and better model complex scenarios where simple linear regression may not suffice. ### Combined Multiple Linear and Polynomial Regression In this example, we will implement a combined approach where we use multiple linear regression for the size of the house (in square feet) and polynomial regression for the number of bathrooms, allowing us to model the relationship with price (in thousands of dollars) using polynomial features for the bathroom count up to degree 3. #### Python Code Example **1. Import Libraries** ```python import numpy as np from sklearn.model_selection import train_test_split from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error, r2_score ``` This block imports the necessary libraries for data manipulation and machine learning. **2. Generate Sample Data** ```python np.random.seed(42) # For reproducibility square_footage = np.array([1500, 1600, 1700, 1800, 1900, 2000, 2100, 2200, 2300, 2400, 2500, 2600, 2700]) bathrooms = np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5]) price = np.array([300, 320, 340, 360, 380, 400, 420, 440, 460, 480, 500, 520, 540]) + np.random.normal(0, 20, 13) # Adding noise ``` This block generates sample data for house sizes, number of bathrooms, and prices, introducing random noise to simulate real-world pricing variations. **3. Prepare Features and Target Variables** ```python X = np.column_stack((square_footage, bathrooms)) # Combine features y = price # Price in thousands print("Features (X):", X) ``` `Output:` ``` Features (X): [[1500 1] [1600 1] [1700 2] [1800 2] [1900 2] [2000 3] [2100 3] [2200 3] [2300 4] [2400 4] [2500 4] [2600 5] [2700 5]] ``` **4. Split the Dataset** ```python X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) ``` This block splits the dataset into training and testing sets for model evaluation. **5. Create Polynomial Features for Bathrooms** ```python poly = PolynomialFeatures(degree=3, include_bias=False) X_poly_bathrooms_train = poly.fit_transform(X_train[:, 1].reshape(-1, 1)) # Only bathrooms X_poly_bathrooms_test = poly.transform(X_test[:, 1].reshape(-1, 1)) # Combine square footage with polynomial features of bathrooms X_poly_train = np.column_stack((X_train[:, 0], X_poly_bathrooms_train)) X_poly_test = np.column_stack((X_test[:, 0], X_poly_bathrooms_test)) print("Polynomial Features (X_poly_train):", X_poly_train) print("Polynomial Features (X_poly_test):", X_poly_test) ``` `Output:` ``` Polynomial Features (X_poly_train): [[2.30e+03 4.00e+00 1.60e+01 6.40e+01] [2.00e+03 3.00e+00 9.00e+00 2.70e+01] [1.70e+03 2.00e+00 4.00e+00 8.00e+00] [1.60e+03 1.00e+00 1.00e+00 1.00e+00] [2.70e+03 5.00e+00 2.50e+01 1.25e+02] [1.90e+03 2.00e+00 4.00e+00 8.00e+00] [2.20e+03 3.00e+00 9.00e+00 2.70e+01] [2.50e+03 4.00e+00 1.60e+01 6.40e+01] [1.80e+03 2.00e+00 4.00e+00 8.00e+00] [2.10e+03 3.00e+00 9.00e+00 2.70e+01]] Polynomial Features (X_poly_test): [[2.60e+03 5.00e+00 2.50e+01 1.25e+02] [2.40e+03 4.00e+00 1.60e+01 6.40e+01] [1.50e+03 1.00e+00 1.00e+00 1.00e+00]] ``` **6. Create and Train the Model** ```python model = LinearRegression() model.fit(X_poly_train, y_train) ``` This block initializes the combined regression model and trains it using the transformed training dataset. **7. Make Predictions** ```python y_pred = model.predict(X_poly_test) ``` This block uses the trained model to make predictions on the test set. **8. Evaluate the Model** ```python mse = mean_squared_error(y_test, y_pred) r2 = r2_score(y_test, y_pred) print(f'Mean Squared Error: {mse:.2f}') print(f'R-squared: {r2:.2f}') ``` `Output:` ``` Mean Squared Error: 199.75 R-squared: 0.98 ``` This structured approach effectively combines multiple features with polynomial transformations, providing a comprehensive understanding of how to implement and evaluate the model. ### Evaluating Linear Regression Model Evaluating a linear regression model involves assessing how well it predicts the dependent variable using various metrics and techniques. Here are some key methods for evaluation: #### 1. Performance Metrics - **Mean Squared Error (MSE)**: Measures the average squared difference between predicted and actual values. Lower values indicate better model performance. - Formula: `MSE = (1/n) * Σ (yi - ŷi)^2` ```python from sklearn.metrics import mean_squared_error mse = mean_squared_error(y_test, y_pred) print(f'Mean Squared Error: {mse}') ``` - **Root Mean Squared Error (RMSE)**: The square root of MSE, providing an error metric in the same units as the dependent variable. It is also sensitive to outliers. - Formula: `RMSE = √(MSE)` ```python import numpy as np rmse = np.sqrt(mse) print(f'Root Mean Squared Error: {rmse}') ``` - **Mean Absolute Error (MAE)**: Measures the average absolute differences between predicted and actual values. It is less sensitive to outliers than MSE. - Formula: `MAE = (1/n) * Σ |yi - ŷi|` ```python from sklearn.metrics import mean_absolute_error mae = mean_absolute_error(y_test, y_pred) print(f'Mean Absolute Error: {mae}') ``` #### 2. Cross-Validation Cross-validation is a robust technique for assessing the performance of a machine learning model by splitting the dataset into multiple parts and validating the model on different subsets of the data. Here are common cross-validation techniques: - **K-Fold Cross-Validation**: The dataset is split into `k` subsets. The model is trained on `k-1` subsets and validated on the remaining subset. This process is repeated `k` times, each time with a different subset as the validation set. The average performance metric over the `k` folds provides a more reliable evaluation. ```python from sklearn.model_selection import KFold, cross_val_score kf = KFold(n_splits=5, shuffle=True, random_state=42) scores = cross_val_score(model, X, y, cv=kf, scoring='neg_mean_squared_error') print(f'Cross-Validation MSE: {np.mean(-scores)}') ``` - **Leave-One-Out Cross-Validation (LOOCV)**: A special case of K-Fold cross-validation where `k` equals the number of data points. Each data point is used once as a validation set, and the remaining data points are used for training. This method is computationally intensive but useful for small datasets. ```python from sklearn.model_selection import LeaveOneOut loo = LeaveOneOut() scores = cross_val_score(model, X, y, cv=loo, scoring='neg_mean_squared_error') print(f'Leave-One-Out Cross-Validation MSE: {np.mean(-scores)}') ``` - **Stratified K-Fold Cross-Validation**: Similar to K-Fold cross-validation but ensures that each fold is representative of the overall class distribution. This method is particularly useful for imbalanced datasets. ```python from sklearn.model_selection import StratifiedKFold skf = StratifiedKFold(n_splits=5) scores = cross_val_score(model, X, y, cv=skf, scoring='neg_mean_squared_error') print(f'Stratified K-Fold Cross-Validation MSE: {np.mean(-scores)}') ``` By using these evaluation methods and cross-validation techniques, practitioners can assess the effectiveness of their linear regression model, ensuring it generalizes well to unseen data. ### Regularization in Regression Regularization is a technique used in regression analysis to prevent overfitting and improve model generalization by adding a penalty term to the loss function. This penalty discourages overly complex models by constraining the size of the coefficients, which helps manage the bias-variance tradeoff. The two most common forms of regularization in regression are L1 regularization (Lasso) and L2 regularization (Ridge). #### L2 Regularization (Ridge Regression) **Concept**: L2 regularization adds a penalty equal to the square of the magnitude of coefficients to the loss function. This is known as the L2 norm. **Loss Function**: The modified loss function for Ridge regression can be represented as: `Loss = Σ(yi - ŷi)^2 + λ * Σ(wj^2)` Where: - `yi` is the actual value. - `ŷi` is the predicted value. - `wj` are the model coefficients. - `λ` is the regularization parameter that controls the strength of the penalty. **Effects**: - Ridge regression shrinks the coefficients towards zero but does not set them exactly to zero. As a result, all features remain in the model, making it suitable for situations with many predictors, especially when multicollinearity is present. - The quadratic penalty means that larger coefficients are penalized more heavily, promoting stability in predictions. **Coefficient Plotting**: When visualizing coefficients, Ridge regression shows a smooth decrease in coefficient values as the regularization parameter increases, resulting in more balanced coefficients without dropping any variables. #### L1 Regularization (Lasso Regression) **Concept**: L1 regularization adds a penalty equal to the absolute value of the magnitude of coefficients to the loss function, known as the L1 norm. **Loss Function**: The modified loss function for Lasso regression is expressed as: `Loss = Σ(yi - ŷi)^2 + λ * Σ|wj|` Where: - `yi` is the actual value. - `ŷi` is the predicted value. - `wj` are the model coefficients. - `λ` is the regularization parameter. **Effects**: - Lasso regression can shrink some coefficients to exactly zero, effectively performing variable selection. This is beneficial in creating simpler, more interpretable models. - The linear penalty allows for certain coefficients to be excluded from the model, which can be especially useful when dealing with high-dimensional data. **Coefficient Plotting**: In Lasso regression, as the regularization parameter increases, we typically observe that some coefficients drop to zero quickly, creating a sparse model where only the most significant features retain non-zero coefficients.
harshm03
1,922,320
How SQL Enhances Your Data Science Skills
Welcome to the third installment of my series on data science. Over the past two weeks, I explored...
0
2024-07-13T13:36:18
https://dev.to/mesfin_t/how-sql-enhances-your-data-science-skills-bkl
sql, datascience, dataanalysis
Welcome to the third installment of my series on data science. Over the past two weeks, I explored some fundamental aspects of data science. In my first post, I [discussed key concepts and tools in data science](https://dev.to/mesfin_t/my-journey-to-learn-data-science-and-machine-learning-3a29). Last week, [I discussed into the power of Python for data analysis](https://dev.to/mesfin_t/my-experience-with-python-for-data-analysis-14jo). This week, let’s dive into another critical skill for any data scientist: SQL (Structured Query Language). <h2>Why SQL is Essential for Data Science</h2> SQL is the backbone of data manipulation and retrieval in relational databases. Mastering SQL empowers data scientists to efficiently query, manipulate, and analyze data stored in databases. Here’s why SQL is indispensable: **1. Efficient Data Retrieval:** SQL allows you to quickly retrieve data from large datasets using simple queries. **2. Data Manipulation:** With SQL, you can perform complex data transformations such as filtering, aggregating, and joining data from multiple tables. **3. Data Cleaning:** SQL helps in cleaning data by identifying and handling missing or inconsistent values. **4. Integration with Other Tools:** SQL integrates seamlessly with other data science tools and languages like Python, R, and BI tools, enabling smooth workflows. <h2>Key SQL Concepts for Data Scientists</h2> Here are some fundamental SQL concepts and how they enhance data analysis: **1. SELECT Statement:** The `SELECT` statement is the cornerstone of SQL, used to fetch data from a database. Example. ```sql SELECT * FROM employees; ``` This query retrieves all records from the `employees` table. **2. WHERE Clause:** The `WHERE` clause filters records based on specific conditions. Example. ```sql SELECT * FROM employees WHERE department = 'Sales'; ``` **3. JOIN Operations** Joins are used to combine rows from two or more tables based on a related column. Example: ```sql SELECT e.name, d.department_name FROM employees e JOIN departments d ON e.department_id = d.department_id; ``` This query retrieves employee names along with their corresponding department names. **4. Aggregate Functions** Aggregate functions perform calculations on a set of values and return a single value. Example: ```sql SELECT department, COUNT(*) AS employee_count FROM employees GROUP BY department; ``` This query counts the number of employees in each department. <h2>Practical Examples and Visualizations</h2> Let’s look at some practical SQL examples and how they can be visualized. Example 1: Employee Distribution by Department SQL Query: ```sql SELECT department, COUNT(*) AS employee_count FROM employees GROUP BY department; ``` Visualization: Example 2: Average Salary by Department SQL Query: ```sql SELECT department, AVG(salary) AS average_salary FROM employees GROUP BY department; ``` <h2>How SQL Enhances Data Science Skills</h2> **1. Data Exploration:** SQL enables thorough data exploration by allowing you to query and understand data distributions, trends, and anomalies. **2. Data Preparation:** Efficiently prepare data for analysis by cleaning and transforming datasets directly within the database. **3. Data Integration:** Combine data from multiple sources and tables to create comprehensive datasets for analysis. **4. Performance Optimization:** Learn to optimize queries for better performance, which is crucial when dealing with large datasets. <h2>Conclusion</h2> SQL is a powerful tool that complements other data science skills, making it easier to handle and analyze data effectively. By mastering SQL, you can enhance your ability to retrieve, manipulate, and analyze data, thereby improving your overall data science capabilities. Stay tuned for next week’s post, where we’ll explore another exciting topic in our data science journey. Happy querying!
mesfin_t
1,922,321
Luminous Wicks: Enchanting Aromas and Elegant Candles with Wix Studio
This is a submission for the [Wix Studio Challenge]. What I Built I have developed a...
0
2024-07-13T13:37:38
https://dev.to/syed_nasreen_ebac74a250d1_42/luminous-wicks-enchanting-aromas-and-elegant-candles-with-wix-studio-1da9
devchallenge, wixstudiochallenge, webdev, javascript
This is a submission for the [Wix Studio Challenge]. ## What I Built I have developed a dynamic ecommerce website from scratch using the Wix platform, specializing in offering a curated selection of candles and aromas. Leveraging Wix's powerful design tools and customizable features, I created a visually appealing and intuitive online store. The website showcases a diverse range of high-quality products, each chosen for their unique scents and therapeutic benefits. With a focus on user experience, I implemented seamless navigation and engaging product displays to enhance customer interaction and satisfaction. This project highlights my ability to utilize Wix's robust capabilities to build a sophisticated ecommerce solution tailored to meet the needs of candle and aroma enthusiasts. ## Demo https://syednasreen1506.wixstudio.io/candles ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0zk4n3fdaf85jl514q2h.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u642v1uutamlnp2tvc96.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/478stpo8w3ulm4ssh3p6.png) ## Development Journey My journey with the candles and aromas ecommerce project began by curating a specialized product collection through thorough market research. Using the versatile Wix platform, I swiftly designed and launched a visually captivating and user-friendly ecommerce website from scratch. Wix's intuitive tools enabled me to optimize performance and enhance user experience, ensuring seamless navigation and highlighting the unique qualities of each product. This project highlights Wix's effectiveness in empowering me to create a professional online shopping experience tailored for candle and aroma enthusiasts. <!-- Which APIs and Libraries did you utilize? --> • Integrated a fully functional store for showcasing products. • Implemented a dynamic blog to engage customers with updates and insights. • Established a loyalty program to reward and retain returning visitors. • Developed a members area for exclusive content and personalized user experiences.
syed_nasreen_ebac74a250d1_42
1,922,322
Você comete este erro no Left Join?
Um exemplo comum de erro ao usar o Left Join
0
2024-07-13T13:41:20
https://dev.to/airtoncarneiro/voce-comete-este-erro-no-left-join-2c6p
sql
--- title: Você comete este erro no Left Join? published: true description: Um exemplo comum de erro ao usar o Left Join tags: sql --- # Você comete este erro no Left Join? ![Imagem de capa do artigo](https://media-exp1.licdn.com/dms/image/C4D12AQG4ox7yv7p6EQ/article-cover_image-shrink_423_752/0/1651578328606?e=2147483647&v=beta&t=ZK8wOw_90WeE1xBBqlyhK_iFUYo9tZ1536HyvMVMUQc) Imagina que está chegando o dia das crianças e que sua empresa irá promover uma confraternização entre **todos os funcionários e também** oferecer um espaço de brincadeiras **para os filhos** daqueles. Então, você recebe como missão entregar a relação de todos os funcionários e seus respectivos filhos menores de 18 anos. Vamos supor que temos as seguintes tabelas: [![Exibição das duas tabelas usadas](https://media-exp1.licdn.com/dms/image/C4D12AQEf8ioRWKt2Zg/article-inline_image-shrink_1500_2232/0/1651581016945?e=2147483647&v=beta&t=IBjlCbKlhJfJbuXyqhAK_gAr_r-f9uXJx0AGvBmd6QY)](https://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=70d84946a39ee6ae487e93fa4ec7fbf2) Você começa a fazer a consulta SQL e sabe que nem todos os funcionários possuem filhos. Assim, você usará o _left join e monta a seguinte query:_ ``` sql SELECT   F.nome  func\_nome   ,D.nome depend\_nome   ,D.idade depend\_idade FROM FUNCIONARIO F LEFT JOIN DEPENDENTE   ON D.func\_id = F.id WHERE  D.idade < 18 ``` Por incrível que parece, já vi muitas consultas com este tipo de erro. Se você não percebeu o erro, vejamos o retorno da query: [![tabela de funcionários](https://media-exp1.licdn.com/dms/image/C4D12AQFX8h7E-5HrgQ/article-inline_image-shrink_1500_2232/0/1651582362950?e=2147483647&v=beta&t=gw7GcXNksafjuNB0Mes4Yq-T4nzmAvDNWFlSzifFW4U)](https://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=70d84946a39ee6ae487e93fa4ec7fbf2) Como percebido, temos o funcionário _José Bonifácio_ na lista e o _Pe. José de Anchieta,_ não. O resultado esperado seria: [![tabela de dependentes](https://media-exp1.licdn.com/dms/image/C4D12AQFv_3WyLbzzLw/article-inline_image-shrink_1500_2232/0/1651582381903?e=2147483647&v=beta&t=secJjHusk3oUMmXeKdq5PxgGvYtaL2cYW754qBGcKZY)](https://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=70d84946a39ee6ae487e93fa4ec7fbf2) Por que o erro acontece? Porque quando levamos algum campo **idade** (que pertence à tabela do _LEFT JOIN_) para a cláusula _WHERE_, o SGBD passa a fazer um INNER JOIN. Vejamos o plano de execução gerado para esta consulta: ![plano de execução para inner join](https://media-exp1.licdn.com/dms/image/C4D12AQF67eKAciJdAA/article-inline_image-shrink_1500_2232/0/1651583202846?e=2147483647&v=beta&t=nobJsh05_sZTYfdvbki-tgAxxVvuCPtXQxPIPTZZz3A) Grifado em amarelo está como o BD "monta" a consulta. Ela usou um _INNER JOIN_! Assim, a consulta certa seria: ``` sql SELECT   F.nome   AS 'func\_nome'   ,D.nome  AS 'depend\_nome'   ,D.idade AS 'depend\_idade' FROM FUNCIONARIO F LEFT JOIN DEPENDENTE D   ON D.func\_id = F.id      AND D.idade < 18 ``` Que gera o seguinte plano de execução: ![plano de execução para left join](https://media-exp1.licdn.com/dms/image/C4D12AQE-bxE9WpWTyg/article-inline_image-shrink_1500_2232/0/1651583434146?e=2147483647&v=beta&t=a7DS4UDrIn1p8U66XaWyhBwYsEzNASkdkGjiAs7xQZQ) Concluindo, refaço a pergunta: Você comete este erro no _Left Join_? Caso queria ver os exemplos na prática, acesse este [link](git@github.com:airtoncarneiro/my-dev.to.git). v.3.01
airtoncarneiro
1,922,323
What is the Page Object Model (POM), and how does it benefit Selenium automation testing? #InterviewQuestion
Interview Question: What is the Page Object Model (POM), and how does it benefit Selenium...
28,054
2024-07-13T13:41:52
https://dev.to/codegreen/what-is-the-page-object-model-pom-and-how-does-it-benefit-selenium-automation-testing-interviewquestion-2ddp
java, selenium
###Interview Question: What is the Page Object Model (POM), and how does it benefit Selenium automation testing? Discuss a specific project where you implemented POM and its impact on test maintenance and scalability. Page Object Model (POM) in Selenium Automation Testing ------------------------------------------------------ **Page Object Model (POM)** is a design pattern in Selenium WebDriver that helps in enhancing test maintenance and scalability by abstracting web elements and actions on a web page into reusable classes called Page Objects. Benefits of using POM: * **Code Reusability:** Page Objects encapsulate web elements and related methods, making them reusable across multiple tests. * **Easy Maintenance:** Changes to the UI are confined to the Page Objects, reducing maintenance efforts as updates are localized. * **Improved Scalability:** POM promotes structured test development, making it easier to add new tests and scale automation efforts. * **Enhanced Readability:** Tests become more readable and understandable, as business logic and page interactions are separated. **Example:** Suppose we have a Login Page with username, password fields, and a login button. Here’s how a Page Object might look in Java: ----------------------------------------------------- ### LoginPage.java In this example, we'll separate the WebElement locators into a separate class and use @FindBy annotations for clarity and maintainability. **LoginPageElements.java** ```java class LoginPageElements { WebDriver driver; @FindBy(id = "username") WebElement usernameField; @FindBy(id = "password") WebElement passwordField; @FindBy(id = "loginButton") WebElement loginButton; public LoginPageElements(WebDriver driver) { this.driver = driver; PageFactory.initElements(driver, this); } } ``` **LoginPage.java:** ```java import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.support.FindBy; import org.openqa.selenium.support.PageFactory; public class LoginPage { private WebDriver driver; private LoginPageElements elements; public LoginPage(WebDriver driver) { this.driver = driver; this.elements = new LoginPageElements(driver); PageFactory.initElements(driver, this); } public void enterUsername(String username) { elements.usernameField.sendKeys(username); } public void enterPassword(String password) { elements.passwordField.sendKeys(password); } public void clickLoginButton() { elements.loginButton.click(); } } ``` **Explanation:** * `LoginPageElements.java`: This class stores the WebElement locators using `@FindBy` annotations. It initializes elements using `PageFactory.initElements` to initialize WebElements. * `LoginPage.java`: This class initializes the WebDriver and LoginPageElements. It contains methods to interact with the login page elements. * `@FindBy` annotations help in locating elements without the need for `driver.findElement` calls, improving code readability and reducing duplication.
codegreen
1,922,324
AWStuff: Dedicated Instance vs. Dedicated Host
Introduction Hello everyone, Today, I will try to summarize a bit complicated concept in...
0
2024-07-14T07:00:38
https://dev.to/shameel/awstuff-dedicated-instance-vs-dedicated-host-327e
aws, ec2, learning, todayilearned
## Introduction Hello everyone, Today, I will try to summarize a bit complicated concept in easiest possible terms. The concept is in the domain of EC2 (Elastic Compute Cloud) which is most widely used service in AWS. This service allows you to rent your own computer in the cloud where you can run windows, linux or any other OS. But it has different types and flavors as well. One of the concept that confuses us is between dedicated instance vs. dedicated host in EC2. So, let's explore that! ## Video Explanation {% youtube e2gKY6YYjyc %} ## Dedicated Instance Vs. Dedicated Host Lets begin with the official documentation: ### Dedicated Instance > By default, EC2 instances run on shared tenancy hardware. This means that multiple AWS accounts might share the same physical hardware. > Dedicated Instances are EC2 instances that run on hardware that's dedicated to a single AWS account. This means that Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts, even if those accounts are linked to a single payer account. However, Dedicated Instances might share hardware with other instances from the same AWS account that are not Dedicated Instances. Source: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html ### Dedicated Host > Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2, so that you get the flexibility and cost effectiveness of using your own licenses, but with the resiliency, simplicity and elasticity of AWS. An Amazon EC2 Dedicated Host is a physical server fully dedicated for your use, so you can help address corporate compliance requirements. Source: https://aws.amazon.com/ec2/dedicated-hosts/ ### Difference? From the docs, they both look like something similar, right? But that is not the case. Let me try and explain it... So, in general, whenever you go for EC2 instance, AWS will give you any hardware where your instance will be deployed and then you can use OS (Windows/Linux). You do not get the control over hardware at all. Note that the hardware is very huge like CPU, RAM and other stuff so it will be shared with other instances as well. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ruv94gvtur1a74uoo7ao.png) When you choose to go for **Dedicated Instance** then you get your instance dedicated on one hardware which is not currently occupied by any one else. Every time you stop and start your instance, **you will not be given the same hardware** but you will be given an isolated hardware to your own AWS account. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4jhkba7ekaibzgbjhiy.png) When you choose to go for **Dedicated Host**, you lock down for a particular hardware where you can deploy your instances. Whenever you stop and start your instance, **you will get same hardware** for your instances. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmfstuopaoqhvlv611po.png) ## Conclusion In this article, I tried to explain the difference in easiest terms in AWS EC2 for dedicated host and dedicated instance. In dedicated instance, you get isolated to one hardware but every time you stop/start the instance then you get a different hardware but in dedicated instance, you get same hardware after you stop and start the instance. Hope the explanation was clear enough :) Happy clouding! 🚀 **Follow me for more such content**: {% cta https://www.youtube.com/@ShameelUddin123 %} 🚀 Follow on YouTube {% endcta %} {% cta https://www.linkedin.com/in/shameeluddin/ %} 🚀 Follow on LinkedIn {% endcta %} {% cta https://github.com/Shameel123 %} 🚀 Follow on GitHub {% endcta %}
shameel
1,922,325
Shopping Site
This is a submission for the Wix Studio Challenge . What I Built using wix studio to...
0
2024-07-13T13:46:04
https://dev.to/karthik_n/shopping-site-4gdm
devchallenge, wixstudiochallenge, webdev, javascript
*This is a submission for the [Wix Studio Challenge ](https://dev.to/challenges/wix).* ## What I Built using wix studio to build this website ## Demo [preview](https://karthikdevacc.wixstudio.io/fashion-store) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f2vcwibj2onz389zb6fn.png)
karthik_n
1,922,326
I am developing a Text Input Component based on Skia and Canvas
visit github TextMagic is the next generation text component. Unlike native input and textarea...
0
2024-07-13T13:49:33
https://dev.to/gezilinll/i-am-developing-a-text-input-component-based-on-skia-and-canvas-1l04
javascript, webdev
[visit github](https://github.com/gezilinll/TextMagic) TextMagic is the next generation text component. Unlike native input and textarea components, it supports richer text effects and typesetting capabilities. By controlling text layout autonomously, it ensures consistent text display across different platforms and browsers. TextMagic follows a modular design approach, offering both an integrated component(@text-magic) for seamless integration and standalone components for specific needs: @text-magic/input for text input and @text-magic/renderer for text typesetting and rendering. If anyone shares an interest in text or related fields, I welcome discussion and collaboration. I'm also in the process of learning in this area and would appreciate more feedback and assistance.
gezilinll
1,922,327
How to Integrate Next-Auth with Your Next.js Application
Recently, I migrated a frontend application from an older version of Next.js to Next.js 14. During...
0
2024-07-16T12:50:07
https://dev.to/nagatodev/how-to-integrate-next-auth-with-your-nextjs-application-4mn9
Recently, I migrated a frontend application from an older version of Next.js to Next.js 14. During this process, I decided to upgrade the session handling approach. After extensive research, I chose to use NextAuth.js, now known as Auth.js. In the application, server-side authentication with AWS Cognito is utilized, and I wanted to maintain this setup. The goal was to seamlessly integrate authentication tokens and user data retrieved from successful authentications into Auth.js, ensuring session availability across the entire app and enabling authorization enforcement for different pages. In this article, I'll guide you through the process of successfully integrating Auth.js with your Next.js application using the app router. We will cover both the credentials (email and password) approach and Google social login. **Note:** Next-Auth v5 is still in beta mode, so I wouldn't fully recommend integrating it into a production application just yet. However, the final decision is up to you. ## Prerequisites 1. Familiarity with the basics of React.js 2. Brief experience with Next.js app router 3. Basic understanding of Typescript, as we'll be using Next.js with TypeScript You don't need to have prior experience with Next-Auth.js (now Auth.js) to follow this article. **Note:** For the rest of the article, I’ll be using `next-auth` and `Auth.js` interchangeably. **Let's get started!!** ------------------------------------------------------ **1. Create your Next.js application** Start by generating a new Next.js application: ``` npx create-next-app@latest ``` After running the command above, follow the prompts, accepting the defaults for TypeScript and Tailwind CSS as we'll be using TypeScript and will apply minor styling with Tailwind. Also choose `Yes` for the import alias and accept the default alias configuration. **2. Install Auth.js** Install the latest version of auth.js using npm or yarn ``` npm install next-auth@beta ``` or ``` yarn add next-auth@beta ``` **3. Generate and Configure Your Environment Variables** Create a .env.local file and add your Auth.js secret. Auth.js uses this for encryption of your JWTs and cookies. ``` touch .env.local ``` Generate the secret string by running any of the two commands: ``` # Linux/Mac terminal openssl rand -base64 33 # Alternatively, use `auth` to generate the secret npx auth secret ``` Add the generated secret to your `.env.local` file: ``` AUTH_SECRET="your-secret" ``` You can escape running into major issues when you don't set this in your development environment. But it is compulsory to set it in the production environment. You should also add your backend URL here ``` NEXT_PUBLIC_BASE_URL="your-backend-url" ``` **4. Create your interface and types.** To manage your types effectively, create a new directory named `types` in your root directory (or in your /src directory if you are using it), then inside it, add two new files: `login.ts` and `user.ts`. ``` ├── app ├── public ├── types │ ├── login.ts │ ├── user.ts ├── .env.local └── ... ``` Your app directory should look something like this with some other files and folders which are not shown here. **`login.ts`** Add the following types for different authentication scenarios: ``` type CredentialsType = { username: string; password: string; }; type SocialCredentialsType = { auth_code: string; }; export type { CredentialsType, SocialCredentialsType }; ``` `CredentialsType`: Defines the structure for email and password login credentials. `SocialCredentialsType`: Defines the structure for social login credentials, specifically for Google authentication. **`user.ts`** Define interface and type for user data to manage session details: ``` interface UserType { id: string; name: string; email: string; avatar: string; premiumSubscription: boolean; accessToken: string; refreshToken: string; subId: string; } type UserResponseType = { id: string; name: string; email: string; avatar: string; premium_subscription: boolean; access_token: string; refresh_token: string; sub_id: string; }; export type { UserType, UserResponseType }; ``` - `UserType`: Interface specifying the expected properties for user session management. - `UserResponseType`: Type specifying the format of user data as received from the backend. **5. Create your Auth.js configuration.** Create a new file named `auth.ts` in your root directory. If you are using the `src/` directory approach, place the file directly within the `src/` folder. ``` ├── app ├── public ├── types │ ├── login.ts │ ├── user.ts ├── .env.local ├── auth.ts └── ... ``` **`auth.ts`** At the top of the file import the necessary libraries and types for configuring Auth.js. ``` // library imports import NextAuth from "next-auth"; import CredentialsProvider from "next-auth/providers/credentials"; // types imports import type { NextAuthConfig, Session, User } from "next-auth"; import type { UserType, UserResponseType } from "@/types/user"; import { AdapterUser } from "next-auth/adapters"; import { CredentialsType, SocialCredentialsType } from "@/types/login"; import { JWT } from "next-auth/jwt"; ``` Next, we have to modify some of the types in the next-auth library with some of our own properties. ``` declare module "next-auth" { interface User extends UserType {} } declare module "next-auth/adapters" { interface AdapterUser extends UserType {} } declare module "next-auth/jwt" { interface JWT extends UserType {} } ``` Here, we used the TypeScript module augmentation feature to extend the `User`, `AdapterUser`, `JWT` interfaces from NextAuth with properties defined in `UserType` from our `user.ts` file. This ensures that our custom user properties are recognized throughout the Auth.js configuration. Next we declare `authOptions` object which we'll use to initialize `NextAuth` at the end. ``` const authOptions = { providers: [ // Add authentication providers here (e.g., CredentialsProvider, GoogleProvider) ], callbacks: { // Add custom authentication callbacks here (e.g., signIn, signOut, jwt, session) }, pages: { // Customize authentication-related pages (e.g., signIn, error) }, session: { // Configure session options (e.g., JWT settings, session management) }, } satisfies NextAuthConfig; export const { handlers, auth, signIn, signOut } = NextAuth(authOptions); ``` **A.providers:** accepts an array of providers such as CredentialsProvider, GoogleProvider, e.t.c. **B.callbacks:** specifies custom callback functions that can be triggered at various points in the authentication process. **C.pages:** auth.js has default authentication-related pages for signIn, error, signOut e.t.c. But in this block , we can override it with our customized pages by specifying their paths. **D.session:** Configures session options, such as how sessions are handled and stored. e.g jwt or database approach - Finally, we initialize `NextAuth` with the `authOptions` configuration and destructure the returned methods (handlers, auth, signIn, signOut) for use in our application. - `handlers`: Middleware for handling NextAuth requests. - `auth`: A universal method to interact with Auth.js in your Next.js app. - `signIn`: Method to trigger sign-in. - `signOut`: Method to trigger sign-out. The code in the `auth.ts` file should look like this now ``` // library imports import NextAuth from "next-auth"; import CredentialsProvider from "next-auth/providers/credentials"; // types imports import type { NextAuthConfig, Session, User } from "next-auth"; import type { UserType, UserResponseType } from "@/types/user"; import { AdapterUser } from "next-auth/adapters"; import { CredentialsType, SocialCredentialsType } from "@/types/login"; import { JWT } from "next-auth/jwt"; declare module "next-auth" { interface User extends UserType {} } declare module "next-auth/adapters" { interface AdapterUser extends UserType {} } declare module "next-auth/jwt" { interface JWT extends UserType {} } const authOptions = { providers: [ ], callbacks: { }, pages: { }, session: { }, } satisfies NextAuthConfig; export const { handlers, auth, signIn, signOut } = NextAuth(authOptions); ``` Now, let's complete the `authOptions` object with the necessary values. **A. providers**: We will use `CredentialsProvider` for handling authentication on the server side by making requests to our server URL. We will set it up twice: once for email and password login, and once for social login with Google. To distinguish between the two, we specify unique id values. ``` providers: [ CredentialsProvider({ id: "credentials", name: "Credentials", authorize: async (credentials) => { try { const user = await fetchUser( `${process.env.NEXT_PUBLIC_BASE_URL}/auth/login`, { username: typeof credentials.username === "string" ? credentials.username : "", password: typeof credentials.password === "string" ? credentials.password : "", } ); return user ? createUser(user) : null; } catch (error) { console.error("Error during authentication", error); return null; } }, }), CredentialsProvider({ id: "social", name: "Custom Social Login", authorize: async (credentials) => { try { const user = await fetchUser( `${process.env.NEXT_PUBLIC_BASE_URL}/auth/social_login`, { auth_code: typeof credentials.authCode === "string" ? credentials.authCode : "", } ); return user ? createUser(user) : null; } catch (error) { console.error("Error during authentication", error); return null; } }, }), ], ``` - **`Email and Password Login`**: - The first `CredentialsProvider` is configured for email and password login. - It makes a request to `${process.env.NEXT_PUBLIC_BASE_URL}/auth/login` with the provided credentials via the fetchUser function(replace this URL with your backend URL). - If a user is found, `createUser(user)` is called to format the user data. - **`Social Login with Google`**: - The second `CredentialsProvider` is configured for social login using Google. - It makes a request to `${process.env.NEXT_PUBLIC_BASE_URL}/auth/social_login` with the provided auth code via the `fetchUser` function(replace this URL with your backend URL). - If a user is found, `createUser(user)` is called to format the user data. Each provider has a unique `id` to distinguish between the two methods of authentication. By using `CredentialsProvider` twice with different `id` values, we can manage multiple authentication methods seamlessly. Here are the definitions of the `fetchUser` and `createUser` functions: ``` // Function to authenticate and fetch user details async function fetchUser( url: string, body: CredentialsType | SocialCredentialsType ) { try { const res = await fetch(url, { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify(body), }); const user = await res.json(); if (res.ok && user) { return user; } else { console.error(`Failed to fetch user: ${res.status} ${res.statusText}`); return null; } } catch (error) { console.error(`Error during fetch: ${error}`); return null; } } // Function to create a user object function createUser(user: UserResponseType) { const userObject: UserType = { id: user.id, name: user.name, email: user.email, avatar: user.avatar, premiumSubscription: user.premium_subscription, accessToken: user.access_token, refreshToken: "", //add subId from the auth service here subId: "", // add refresh token here }; return userObject; } ``` **B. callbacks**: The `callbacks` section allows us to customize the behavior of the authentication process. Here, we’ll add two async functions: `jwt` and `session`. ``` callbacks: { async jwt({ token, user }: { token: JWT; user: User }) { // Add the user properties to the token after signing in if (user) { token.id = user.id as string; token.avatar = user.avatar; token.name = user.name; token.email = user.email; token.premiumSubscription = user.premiumSubscription; token.accessToken = user.accessToken; token.subId = user.subId; token.refreshToken = user.refreshToken; } return token; }, async session({ session, token }: { session: Session; token: JWT }) { // Create a user object with token properties const userObject: AdapterUser = { id: token.id, avatar: token.avatar, name: token.name, premiumSubscription: token.premiumSubscription, accessToken: token.accessToken, subId: token.subId, refreshToken: token.refreshToken, email: token.email ? token.email : "", // Ensure email is not undefined emailVerified: null, // Required property, set to null if not used }; // Add the user object to the session session.user = userObject; return session; }, }, ``` - **`jwt callback`**: - The jwt function is called during the authentication process to add user properties to the JWT token. When a user signs in, their properties, such as id, avatar, name, email, accessToken, subId, and refreshToken, are added to the token. This token is then used to securely manage user sessions. - **`session callback`**: - The session function is called whenever a session is accessed to add user data to the session object. A user object is created using the properties from the token. This user object is then added to the session, ensuring that user data is available whenever the session is accessed. The email property is ensured to be a string, and emailVerified is set to null if not used. **C. pages**: In the `pages` section of the `authOptions` object, we define paths to our custom pages that will override the default authentication-related pages provided by Auth.js. This allows us to customize the user experience for specific authentication actions. ``` pages: { signIn: "/auth/login", // Custom sign-in page // error: "/auth/error", // Custom error page }, ``` - **`signIn`**: Specifies the path to a custom sign-in page. By setting this to `/auth/login`, we direct users to our custom login page instead of the default Auth.js sign-in page. - **`error`**: Optionally, we can specify a custom error page. This is commented out here, but if we wanted to use a custom error page, we could set its path like so: error: `/auth/error`. **D. session**: In the session section of the `authOptions` object, we specify how sessions should be managed. In this case, we are using JSON Web Tokens (JWT) as the session strategy. ``` session: { strategy: "jwt", }, ``` - **`strategy`**: By setting this to "jwt", we ensure that sessions are managed using JSON Web Tokens. This means that session data is encoded and stored in a JWT, which is then sent to the client. Here is what the entire file should look like now ``` // library imports import NextAuth from "next-auth"; import CredentialsProvider from "next-auth/providers/credentials"; // types imports import type { NextAuthConfig, Session, User } from "next-auth"; import type { UserType, UserResponseType } from "@/types/user"; import { AdapterUser } from "next-auth/adapters"; import { CredentialsType, SocialCredentialsType } from "@/types/login"; import { JWT } from "next-auth/jwt"; // Modify NextAuth types with custom properties declare module "next-auth" { interface User extends UserType {} } declare module "next-auth/adapters" { interface AdapterUser extends UserType {} } declare module "next-auth/jwt" { interface JWT extends UserType {} } const authOptions = { providers: [ CredentialsProvider({ id: "credentials", name: "Credentials", authorize: async (credentials) => { try { const user = await fetchUser( `${process.env.NEXT_PUBLIC_BASE_URL}/auth/login`, { username: typeof credentials.username === "string" ? credentials.username : "", password: typeof credentials.password === "string" ? credentials.password : "", } ); return user ? createUser(user) : null; } catch (error) { console.error("Error during authentication", error); return null; } }, }), CredentialsProvider({ id: "social", name: "Custom Social Login", authorize: async (credentials) => { try { const user = await fetchUser( `${process.env.NEXT_PUBLIC_BASE_URL}/auth/social_login`, { auth_code: typeof credentials.authCode === "string" ? credentials.authCode : "", } ); return user ? createUser(user) : null; } catch (error) { console.error("Error during authentication", error); return null; } }, }), ], callbacks: { async jwt({ token, user }: { token: JWT; user: User }) { // Add the user properties to the token after signing in if (user) { token.id = user.id as string; token.avatar = user.avatar; token.name = user.name; token.email = user.email; token.premiumSubscription = user.premiumSubscription; token.accessToken = user.accessToken; token.subId = user.subId; token.refreshToken = user.refreshToken; } return token; }, async session({ session, token }: { session: Session; token: JWT }) { // Create a user object with token properties const userObject: AdapterUser = { id: token.id, avatar: token.avatar, name: token.name, premiumSubscription: token.premiumSubscription, accessToken: token.accessToken, subId: token.subId, refreshToken: token.refreshToken, email: token.email ? token.email : "", // Ensure email is not undefined emailVerified: null, // Required property, set to null if not used }; // Add the user object to the session session.user = userObject; return session; }, }, pages: { signIn: "/auth/login", // Custom sign-in page // error: "/auth/error", // Custom error page }, session: { strategy: "jwt", }, } satisfies NextAuthConfig; // Function to authenticate and fetch user details async function fetchUser( url: string, body: CredentialsType | SocialCredentialsType ) { try { const res = await fetch(url, { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify(body), }); const user = await res.json(); if (res.ok && user) { return user; } else { console.error(`Failed to fetch user: ${res.status} ${res.statusText}`); return null; } } catch (error) { console.error(`Error during fetch: ${error}`); return null; } } // Function to create a user object function createUser(user: UserResponseType) { const userObject: UserType = { id: user.id, name: user.name, email: user.email, avatar: user.avatar, premiumSubscription: user.premium_subscription, accessToken: user.access_token, refreshToken: "", //add subId from the auth service here subId: "", // add refresh token here }; return userObject; } export const { handlers, auth, signIn, signOut } = NextAuth(authOptions); ``` **6. Utilize the configuration in the Next.js API route.** To integrate the `NextAuth` configuration into a Next.js API route, follow these steps: Create a new file named `route.ts` in the app directory at the following path: ``` ├── app │ ├── api │ │ ├── auth │ │ │ ├── [...nextauth] │ │ │ │ ├── route.ts ├── auth.ts └── ... ``` In the `route.ts` file, add the following lines of code: ``` // library imports import { handlers } from "@/auth"; export const { GET, POST } = handlers; ``` At the top, we import the `handlers` object from `auth.ts`, then destructure it to extract the `GET` and `POST` methods and exports them. When a request is made to the `/api/auth` endpoint, the `route.ts` file delegates the request to the appropriate handler (`GET` or `POST`) provided by NextAuth. These handlers manage the authentication logic, session management, and other related tasks based on the configuration we defined earlier in `auth.ts`. We are done with setting up `Auth.js`. Now let's use the magic we created and see it in action ------------------------------------------------------ **7. Using Auth.js in our Next.js Application** Now that we have completed the setup for `Auth.js`, it's time to see it in action. We'll integrate authentication into our application by creating email and password sign-in, social login and protected pages. **A. Create the authentication components** First, create a components directory in the root directory of your app (in the `src/` directory if you are using this approach). Then add `auth` and `protected` directories in it. - Create a new file named `login.tsx` in the `components/auth` directory and a new file named `profile.tsx` in the `components/profile` directory: ``` ├── app ├── components │ ├── auth │ │ ├── login.tsx │ ├── profile │ │ ├── profile.tsx ``` - **`login.tsx`**: Here we will add the following code to create a sign-in form and a google login button. ``` "use client"; // library imports import React, { useState, useEffect } from "react"; import Link from "next/link"; import { useSearchParams } from "next/navigation"; export default function SignIn() { const searchParams = useSearchParams(); const googleLogin = "Your google login url which will contain your client id and redirectURI"; const [username, setUsername] = useState(""); const [password, setPassword] = useState(""); const [authenticated, setAuthenticated] = useState(false); const [error, setError] = useState(""); useEffect(() => { if (authenticated) { // Redirect to previous page or home page const next = searchParams.get("next") || "/"; window.location.href = next; } }, [authenticated]); const handleSubmit = async (e: React.FormEvent<HTMLFormElement>) => { e.preventDefault(); try { const res = await fetch("/api/login", { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ username, password, type: "credentials" }), }); if (res.ok) { setAuthenticated(true); } else { // handle error state here setError("Invalid credentials"); } } catch (error) { // handle error state here console.error("Error during sign-in", error); setError("Internal server error"); } }; return ( <div className="mx-auto w-[200px] h-full border-red-100"> <div> <p className="text-xl w-full flex justify-center mt-3 mb-5">Sign In</p> <form onSubmit={handleSubmit}> <label> Username: <input type="text" className="w-full rounded-sm" value={username} onChange={(e) => setUsername(e.target.value)} /> </label> <label> Password: <input className="w-full rounded-sm" type="password" value={password} onChange={(e) => setPassword(e.target.value)} /> </label> <button className="w-full flex justify-center bg-teal-500 text-white mt-3 rounded-md" type="submit" > Sign In </button> {error && <p style={{ color: "red" }}>{error}</p>} </form> </div> <div className="my-2"> <div className="flex justify-center"> or </div> </div> <div className="w-full bg-red-700 rounded-md mb-2"> <Link href={googleLogin} className="flex "> <p className="w-full text-white flex justify-center"> Sign in with Google </p> </Link> </div> </div> ); } ``` We'll send a POST request to an API route (/api/login) to handle authentication. If the login is successful, the user will be redirected to the next page or home page. If the `Sign in with Google` button is clicked, the user will be redirected to the `google-login` page Next, we will create the `/api/login` endpoint to handle the authentication logic on the server side. Create a new folder named `login` in the `app/api` directory and create a `route.ts` file in it. ``` ├── app │ ├── api │ │ ├── auth │ │ ├── login │ │ │ ├── route.ts ├── components ├── auth.ts └── ... ``` - **`route.ts`**: ``` // library imports import { NextResponse, NextRequest } from "next/server"; // internal imports import { signIn } from "@/auth"; export async function POST(req: NextRequest, res: NextResponse) { const data = await req.json(); const { username, password, type } = data; try { const result = type === "credentials" ? await signIn("credentials", { redirect: false, username, password }) : await signIn("social", { redirect: false, authCode: username }); // handle the result of the sign-in attempt if (!result || result.error) { return NextResponse.json({ error: "Invalid credentials" }); } else { return NextResponse.json({ success: true }); } } catch (error) { console.error("Error during sign-in", error); return NextResponse.error(); } } ``` Here, we handle authentication for both credentials and social login types. Based on the `type` of login, we call the signIn function with either credentials (`username and password`) or social login (`auth_code`). Next, we will create the social login page. Create a new file named `socialAuth.tsx` in the `components/auth` directory. ``` ├── app ├── components │ ├── auth │ │ ├── login.tsx │ │ ├── socialAuth.tsx │ ├── profile ``` - **`socialAuth.tsx`**: Add the following code to socialAuth.tsx. ``` "use client"; // library imports import React, { useState, useEffect } from "react"; import Link from "next/link"; import { useRouter, useSearchParams } from "next/navigation"; export default function SocialAuth() { const googleLogin = "Your google login url which will contain your client id and redirectURI"; const router = useRouter(); const searchParams = useSearchParams(); const [authSuccess, setAuthSuccess] = useState(true); const [tokenStatus, setTokenStatus] = useState(false); useEffect(() => { // check query string for authentication code if (authSuccess || tokenStatus) { const url = window.location.href; const code = url.match(/\?code=(.*)/); if (!tokenStatus && authSuccess) { if (code) { authenticateUser(code[1]); } else { router.push("/auth/login"); } } else if (tokenStatus) { // Redirect to previous page or home page const next = searchParams.get("next") || "/"; router.push(next); } else { router.push("/auth/login"); } } }, [tokenStatus, authSuccess]); const authenticateUser = async (code: string) => { try { const res = await fetch("/api/login", { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ username: code, type: "social" }), }); if (res.ok) { setTokenStatus(true); } else { // handle error state here setAuthSuccess(false); } } catch (error) { // handle error state here setAuthSuccess(false); } }; return ( <> <div> {authSuccess ? ( <div> <h1>Authenticating...</h1> </div> ) : ( <div> <h1> {" "} An error occurred while attempting to authenticate your account with Google{" "} </h1> <div> <div> <Link href={googleLogin}>Please try again</Link> </div> </div> </div> )} </div> </> ); } ``` This ensures that users can authenticate via social login providers like Google and handles the authentication flow smoothly. **B. Create the Authentication Pages** Next, let's create the custom `sign-in` and `social login` pages that utilize the components we created above. Create a new directory called `auth` in the `app/` directory. Inside `auth`, create folders named `login` and `social`, and then add a `page.tsx` file in both folders. ``` ├── app │ ├── auth │ │ ├── login │ │ │ ├── page.tsx │ │ ├── social │ │ │ ├── page.tsx ``` - In `login/page.tsx`, add the following code: ``` //component imports import SignIn from "@/components/auth/login"; export default function LoginPage() { return <SignIn />; } ``` - In `social/page.tsx`, add the following code: ``` //component imports import SocialAuth from "@/components/auth/socialAuth"; export default function SocialAuthPage() { return <SocialAuth />; } ``` We are now set with authentication, and you can confirm the state of what we have done so far. - Attempt to navigate to this url `http://localhost:3000/api/auth/signin`. You should be redirected back to the custom sign-in page. This redirection occurs because of the URL path we specified in the `pages` section of our `authOptions` object while setting up the configuration in `auth.ts`. **B. Create protected page** The final step is to create a protected page that users cannot access unless they are authenticated. -**`profile.tsx`** Let's add the code below to our empty file `profile.tsx` in the components directory. ``` "use client"; // library imports import React, { useState, useEffect } from "react"; import { useSession } from "next-auth/react"; import { useRouter, usePathname } from "next/navigation"; export default function Profile() { const router = useRouter(); const pathname = usePathname(); const { data: session } = useSession(); const baseUrl = process.env.NEXT_PUBLIC_BASE_URL; const [loadingProfile, setLoadingProfile] = useState(false); useEffect(() => { if (session?.user?.accessToken) { // fetch user profile if access token is available getUserProfile(session.user.accessToken); } else { // Redirect to `/login` if no access token or no session router.push("/auth/login?next=" + pathname); } }, []); const getUserProfile = (token: string) => { setLoadingProfile(true); fetch(`${baseUrl}/auth/user_profile`, { headers: { Authorization: "Bearer " + token }, }) .then((response) => { setLoadingProfile(false); }) .catch((error) => { // handle error here console.error(error); }); }; return ( <> {" "} {loadingProfile ? ( <p>Loading...</p> ) : ( <div> <p>User Profile</p> <p>Name: {session?.user?.name}</p> <p>Email: {session?.user?.email}</p> </div> )} </> ); } ``` Here we use the `useSession` hook to check if a user session with an access token exists. If the token is present , we go ahead and make a call to the backend route `/auth/user_profile` . In my case i have a protected route in the backend which an unauthenticated user cannot access without a token. While waiting for the profile data to load, a loading indicator is displayed. Once loaded, the component displays the user's name and email. If there is no active session or access token, it redirects the user back to the login page to authenticate. Finally, we create the page which utilizes the `profile.tsx` component above. Create a new folder in the app directory named `profile` and in it a `page.tsx` file. - **`page.tsx`**: ``` // library imports import { SessionProvider } from "next-auth/react"; // internal imports import { auth } from "@/auth"; //component imports import Profile from "@/components/profile/profile"; export default async function ProfilePage() { const session = await auth(); return ( <SessionProvider session={session}> <Profile /> </SessionProvider> ); } ``` First we import the `SessionProvider` component from `next-auth/react` to serve as a context provider for managing the user session state throughout the application. Additionally, we import the `auth` method from `auth.js`. Since this operation requires asynchronous behavior, we utilize a server-side component, using `await` to fetch the session using the `auth` method. Subsequently, we pass this session object obtained from auth to the `SessionProvider` component which acts as a wrapper around the `Profile` component. We have to wrap the component with Session Provider here if we want the session to be accessible within `profile.tsx` using the `useSession` hook, similar to how we handled authentication state above. Now if we try to access the `/profile` route, we would get redirected to the `/auth/login` page since we are not logged in. Once we login, we would be redirected back to the profile page, where we can see our user data. ------------------------------------------------------ Congratulations , you have successfully integrated your next js application using the app router with auth.js v5. ------------------------------------------------------ You might have noticed that in the profile.tsx component, we check the session for a token and then redirect the user back to the login page if the token is not present. This means that Next.js already starts rendering our page before redirecting to the login page if the token is absent. If you pay attention, you'll also notice this in the UI. An alternative to this approach is to use middleware to handle authentication. Next.js has a middleware feature that is triggered when a user attempts to access a route. We can use this to protect our authenticated routes. This way, the protected page is not rendered if an unauthenticated user attempts to access it, as the middleware intercepts the request and redirects the user to the login route. This ensures we have a central location to check the session and redirect users if they are unauthenticated, thereby improving the user experience with the UI. **8. Adding a Middleware** Create a new file called `middleware.ts` in your root directory. If you are using the `src/` directory approach, place the file directly within the `src/` folder. ``` ├── app ├── public ├── types ├── .env.local ├── auth.ts ├── middleware.ts └── ... ``` -**`middleware.ts`** In the file add the following lines of code: ``` import { NextResponse } from "next/server"; import { auth } from "@/auth"; export default auth((req) => { const currentPath = req.nextUrl.pathname; // Redirect to login page if user is not authenticated if (!req.auth) { return NextResponse.redirect( new URL(`/auth/login?next=${currentPath}`, req.url) ); } }); // Manage list of protected routes export const config = { matcher: ["/profile/:path*", "/another-protected-route/:path*"], }; ``` This middleware checks if the user is authenticated before allowing access to specified routes. If the user is not authenticated, they are redirected to the login page. The config object defines the routes that should trigger this middleware, ensuring that only protected routes are affected. By setting up this middleware, you centralize the authentication logic, making your codebase cleaner and more maintainable as your application grows. **`Note`**: In my application, I have several authenticated and unauthenticated routes, which is why I am not triggering the middleware with all requests and instead maintaining a protected routes list. The little downside to this is that you have to manage a list of all your protected routes. This might end up becoming a long list and could become difficult to manage, or someone might implement a protected page and forget to update the middleware. If your base path is the only unprotected route, your job will be much easier. It's been a long ride, but we have finally come to the end of this tutorial. With what you have learned, I believe you can seamlessly integrate `Auth.js v5` into your `next.js application. Here is the link to the [github repo](https://github.com/Faruqt/next-14-authjs) for this project. Cheers!!! If you have any questions, feel free to drop them as a comment or send me a message on [LinkedIn](https://www.linkedin.com/in/faruq-abdulsalam/) and I'll ensure I respond as quickly as I can. Ciao 👋
nagatodev
1,922,328
How to Handle Frames and Windows in Selenium WebDriver #InterviewQuestion
Interview Question: Handling Frames and Windows in Selenium WebDriver Handling Frames and...
28,054
2024-07-13T13:55:20
https://dev.to/codegreen/how-to-hhandling-frames-and-windows-in-selenium-webdriver-interviewquestion-9c1
java, selenium, interview
Interview Question: Handling Frames and Windows in Selenium WebDriver Handling Frames and Windows in Selenium WebDriver ------------------------------------------------- **Handling Frames:** Frames in HTML are used to divide a web page into multiple sections, where each section can load its own HTML content. To interact with elements inside a frame using Selenium WebDriver with Java, you need to switch the WebDriver focus to that frame. **Example Scenario:** ```java // Assume 'driver' is an instance of WebDriver // 1. Switch to a frame by index driver.switchTo().frame(0); // 2. Switch to a frame by name or ID driver.switchTo().frame("frameNameOrId"); // 3. Switch to a frame by WebElement WebElement frameElement = driver.findElement(By.id("frameId")); driver.switchTo().frame(frameElement); // 4. Switch to the parent frame (i.e., switch back to the previous frame level) driver.switchTo().parentFrame(); // 5. Switch to the default content (i.e., switch back to the main document) driver.switchTo().defaultContent(); ``` **Handling Multiple Windows/Tabs:** When a web application opens a new window or tab, Selenium WebDriver treats each window or tab as a separate window handle. To switch between these windows or tabs, you can use the window handles provided by WebDriver. **Example Scenario:** ```java // Assume 'driver' is an instance of WebDriver // Get all window handles Set<String> windowHandles = driver.getWindowHandles(); // Switch to a new window/tab for (String handle : windowHandles) { driver.switchTo().window(handle); // Perform actions on the new window/tab } ``` **Challenges Faced:** One common challenge is synchronizing WebDriver actions when dealing with frames and multiple windows. For example, when switching between frames or windows, WebDriver may need to wait for the new content to load, which can lead to synchronization issues if not handled properly. **Resolution:** To address synchronization issues, I implemented explicit waits using WebDriverWait and ExpectedConditions in Selenium. This ensures that WebDriver waits until certain conditions (like element visibility or presence) are met before proceeding with the next action, thus preventing synchronization errors.
codegreen
1,922,329
Build a ChatGPT on AWS
Introduction ChatGPT is based on OpenAI's GPT (Generative Pre-trained Transformer) architecture. It...
0
2024-07-13T14:02:03
https://dev.to/rashmitha_v_d0cfc20ba7152/build-a-chatgpt-on-aws-ll4
**_Introduction_** ChatGPT is based on OpenAI's GPT (Generative Pre-trained Transformer) architecture. It leverages deep learning to understand and generate human-like text based on the input it receives. Building a ChatGPT involves setting up a similar environment where your model can learn from data and interact with users in a conversational manner. _Service used:_ API Gateway lambda function Bedrock IAM Role 1. _**Create a bedrock:**_ It provides a foundation models: - base models - custom models - imported models the request access should be provided before using. click on base model - no access to any model _Step 1:_ To providing access click on 'enable all models' _Step 2:_ uncheck the Anthropic models. 2.**_create an IAM role_** - click roles - choose lambda function as service. - add permissions (bedrockFullAccess)(cloudwatchlogFullAccess) - provide role name. 3._**create a lambda function:**_ - create a function name. - set the run time - choose existing role. function is created and click configuration to increase the timeout. this is our request body the AI model ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6lzoyscxlqzyay8ohrq2.jpg) create a lambda code and paste it on the lambda_function. 4.**_create a API Gateway:_** -create a API gateway to create a endpoints - click on restAPI - provide a name. click regional and create a resource. - create a method. - choose method type "POST" - check the lambda proxy - choose the lambda function. - click create the method. _Deploy the APT_ - deploy the API to a new stage the API endpoint is create! this API endpoint will call the lambda function to generate the response. - copy the lambda endpoint and paste it in the POSTMAN tool. - select the method as post.
rashmitha_v_d0cfc20ba7152
1,922,330
Mastering Loops and Conditional Statements in C Programming
C Programming – For those who are new to programming, one of the essential languages is C. Since they...
0
2024-07-14T04:30:00
https://dev.to/code_passion/mastering-loops-and-conditional-statements-in-c-programming-3mke
c, programming, tutorial, webdev
[C Programming](https://skillivo.in/c-programming-loops-conditional-statements/) – For those who are new to programming, one of the essential languages is C. Since they are the foundation of most programs, understanding loops and conditional statements is essential. This blog post will discuss some standard loop and condition techniques in C programming that all newcomers should be familiar with. **Introduction to Conditional Statements and Loops in C Programming** Certain code blocks can be executed based on conditions thanks to conditional statements. If a condition is true, the if statement assesses it and then runs a block of code. You can check multiple criteria with the else if statement, and it also gives a default action in the event that none of the circumstances are met. **1. Positive number program** ``` #include <stdio.h> int main() { int num = 10; if (num > 0) { printf("Number is positive.\n"); } else if (num < 0) { printf("Number is negative.\n"); } else { printf("Number is zero.\n"); } return 0; } ``` ([Read more](https://skillivo.in/c-programming-loops-conditional-statements/) about positive number in c) **2. Reversing a Number** ``` #include <stdio.h> int RevNum(int num) { int R = 0; // Reversing the number while (num != 0) { int remainder = num % 10; R = R * 10 + remainder; num /= 10; } return R; } int main() { int num; printf("Enter a number: "); scanf("%d", &num); printf("Reversed number: %d\n", RevNum(num)); return 0; } ``` ([Read more](https://skillivo.in/c-programming-loops-conditional-statements/) about Reversing number in c) **3. Armstrong Number** ``` #include <stdio.h> #include <math.h> // Function to calculate the number of digits in a number int countDigits(int num) { int count = 0; while (num != 0) { num /= 10; ++count; } return count; } // Function to check if a number is an Armstrong number int isArmstrong(int num) { int No, remainder, result = 0, n = 0, power; No = num; // Count number of digits n = countDigits(num); // Calculate result while (No != 0) { remainder = No % 10; // Power of remainder with respect to the number of digits power = round(pow(remainder, n)); result += power; No /= 10; } // Check if num is an Armstrong number if (result == num) return 1; // Armstrong number else return 0; // Not an Armstrong number } int main() { int num; printf("Enter a number: "); scanf("%d", &num); if (isArmstrong(num)) printf("%d is an Armstrong number = ", num); else printf("%d is not an Armstrong number = ", num); return 0; } ``` ([Read more](https://skillivo.in/c-programming-loops-conditional-statements/) about Armstrong number in c) **4. Palindrome number** ``` #include <stdio.h> // Function to check if a number is palindrome or not int P(int num) { int i = 0, no = num; // Reversing the number while (num != 0) { int remainder = num % 10; i = i * 10 + remainder; num /= 10; } // Checking if the reversed number is equal to the original number if (no == i) return 1; // Palindrome no else return 0; // Not a palindrome end if } int main() { int num; printf("Enter a number: "); scanf("%d", &num); if (P(num)) printf("%d palindrome no.\n", num); else printf("%d is not a palindrome no .\n", num); end if return 0; } ``` ([Read more](https://skillivo.in/c-programming-loops-conditional-statements/) about Palindrome number in c) **Conclusion** These programs are crucial for novices to comprehend as they illustrate basic C programming ideas. Effective understanding of these ideas will be aided by practice and experimenting with these examples.
code_passion
1,922,331
Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.
Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.Velo...
0
2024-07-13T14:04:01
https://dev.to/fkbsj_kumar_7304c1cccac43/velo-credit-loan-app-customer-care-helpline-number-91-9831170350-7864967058-call-now-31m9
webdev
Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.
fkbsj_kumar_7304c1cccac43
1,922,333
Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.ppp
Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.Velo...
0
2024-07-13T14:04:50
https://dev.to/fkbsj_kumar_7304c1cccac43/velo-credit-loan-app-customer-care-helpline-number-91-9831170350-7864967058-call-nowppp-511p
beginners
Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now.Velo Credit Loan App Customer Care Helpline Number +91) //-9831170350-}= 7864967058 call now. ![Uploading image](...)
fkbsj_kumar_7304c1cccac43
1,922,335
API Security Fundamentals: Key Practices for Developers
Introduction In the contemporary realm of software creation, APIs (Application Programming...
0
2024-07-13T14:11:12
https://dev.to/api4ai/api-security-fundamentals-key-practices-for-developers-57fa
ai, api, security, oauth
#Introduction In the contemporary realm of software creation, APIs (Application Programming Interfaces) are indispensable. They function as the connectors that enable various software systems to communicate and work together effortlessly. Whether it's allowing mobile apps to retrieve data from servers, supporting third-party integrations, or managing microservices within a broader architecture, APIs form the core of today's interconnected digital environment. Nevertheless, this growing dependence on APIs also introduces notable security challenges. Vulnerable APIs can be prime targets for cyber attackers, resulting in data breaches, unauthorized access, and service interruptions. Prominent incidents have highlighted the dangers and repercussions of insecure APIs, such as compromised user information, financial setbacks, and harm to an organization's reputation. For developers, grasping and applying strong API security practices is essential. At [API4AI](API4AI), we possess extensive experience in API development and a deep understanding of these challenges (since it's our specialty). Having designed and secured numerous APIs across diverse industries, API4AI has garnered substantial knowledge and proficiency in maintaining API security. This blog post is an effort to share this invaluable expertise with the developer community. The aim of this blog post is to furnish developers with a detailed guide on best practices for API security. By informing developers about the most effective methods for protecting their APIs, we seek to reduce the risks associated with API vulnerabilities. This post will provide practical tips and techniques, covering areas such as authentication and authorization, input validation, and encryption, to ensure your APIs stay secure and robust against potential threats. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg77vl3hpn896dz2c8kk.png) #Understanding API Security Threats ## Common API Vulnerabilities As APIs become more essential to software applications, they also become prime targets for a variety of security threats. Recognizing these common vulnerabilities is the first step toward securing your APIs. **Injection Attacks (SQL, NoSQL, Command Injection)** Injection attacks happen when an attacker sends harmful data to an API, tricking it into running unintended commands. This can result in unauthorized data access, data corruption, or even total system compromise. SQL and NoSQL injections are typical examples, targeting databases by injecting harmful queries. Command injections involve inserting arbitrary commands into the system, potentially taking over the server. **Broken Authentication and Session Management** APIs that do not properly authenticate users or manage sessions can let attackers gain unauthorized access to sensitive information. Weak authentication methods, such as using simple API keys without additional verification, and poor session management practices, like not invalidating tokens after logout, can leave APIs vulnerable to exploitation. **Cross-site Scripting (XSS)** XSS attacks occur when an attacker injects malicious scripts into API responses, which are then executed by the user's browser. This can lead to stolen session cookies, redirection to malicious sites, and the execution of unwanted actions on behalf of the user. **Insecure Direct Object References (IDOR)** IDOR vulnerabilities occur when an API exposes internal details, such as database keys or file names, in a manner that allows attackers to access unauthorized data. For example, if an API exposes user IDs in the URL without proper access controls, an attacker can manipulate the URL to access other users' data. ## Case Studies of API Security Breaches **Facebook (2018)** In 2018, Facebook [revealed a flaw in its API](https://techcrunch.com/2018/09/28/everything-you-need-to-know-about-facebooks-data-breach-affecting-50m-users/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAABTv1ltcA4mbjCQPod6ogD9EOoHAV3TyHlZ7tYJgz7joURMODU7Z-v_YTbqS_j84XYKQLKK_EpeZvWyWsKgza-Qh3xkOkDod0lFo_3_RoEYbiUxtZQvHpKgmbX1vZgyJyxMCv0z_Apy1M6zljvLQj6MuuMR6FquVqFvpN-oY0Z7t) that exposed access tokens for almost 50 million users. Attackers exploited a vulnerability in the "View As" feature, allowing them to steal access tokens by injecting malicious code. This incident underscored the necessity of robust access token management and regular security audits. **T-Mobile (2018)** T-Mobile encountered a major breach when attackers exploited an API endpoint that lacked proper authentication. This enabled the attackers to access [personal information of 2.3 million customers](https://www.bitdefender.com/blog/hotforsecurity/2-3-million-t-mobile-customers-exposed-following-data-breach/), including names, billing ZIP codes, phone numbers, email addresses, and account numbers. The breach emphasized the importance of stringent authentication and authorization measures. **GitHub (2020)** In 2020, GitHub experienced a security incident where [attackers used stolen OAuth tokens](https://cyware.com/news/intrusions-abused-github-and-gitlab-oauth-tokens-from-git-analytics-firm-waydev-8d5a62a9) to access private repositories. This breach highlighted the risks associated with third-party integrations and underscored the importance of securing OAuth implementations and regularly rotating tokens. **Lessons Learned** These real-world examples highlight the crucial need for strong API security measures. Key lessons include: - **Regular Security Audits**: Conduct frequent security audits to identify and address vulnerabilities before they can be exploited. - **Strong Authentication Mechanisms**: Implement robust authentication and session management practices to prevent unauthorized access. - **Input Validation**: Ensure all inputs are properly validated and sanitized to prevent injection attacks. - **Access Controls**: Enforce strict access controls to prevent insecure direct object references (IDOR) and unauthorized data access. - **Security Best Practices**: Stay updated on security best practices and emerging threats to continuously enhance API security. By learning from these incidents and understanding common vulnerabilities, developers can better safeguard their APIs against potential threats and ensure the security of their applications. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cy75f2yjlbl27lgr7xi1.png) #Best Practices for API Security Protecting your APIs is a complex task that demands a thorough approach. Applying best practices across different facets of API development and upkeep can greatly diminish the likelihood of security breaches. Here are some crucial areas to concentrate on: ## Authentication and Authorization **Implementing Robust Authentication Mechanisms** - **OAuth**: OAuth is a widely accepted protocol for authorization, enabling users to grant third-party applications access to their resources without revealing their credentials. Utilize OAuth 2.0 to ensure secure authorization for your APIs. - **JWT**: JSON Web Tokens (JWT) are compact, URL-safe tokens used to represent claims between two parties. Employ JWT for secure, token-based authentication. - **API Keys**: Though less secure than OAuth or JWT, API keys can be used for basic authentication. Ensure they are paired with other security measures and not exposed in code or URLs. **Role-based Access Control (RBAC) and Principle of Least Privilege** - **RBAC**: Implement Role-Based Access Control to ensure users only access the resources they need. Clearly define roles and permissions, assigning users to roles based on their duties. - **Principle of Least Privilege**: Apply the principle of least privilege by giving users the minimal level of access necessary for their tasks. Regularly review and adjust permissions to prevent privilege escalation. ## Input Validation and Sanitization **Validating and Sanitizing Inputs to Prevent Injection Attacks** - **Input Validation**: Validate all inputs against a predefined list of acceptable values. Reject any input that does not match the expected formats or ranges. - **Sanitization**: Apply sanitization techniques to remove or neutralize potentially harmful elements from user inputs. This helps prevent injection attacks and ensures data integrity. **Using Libraries and Frameworks for Input Validation** - Utilize reliable libraries and frameworks that offer robust input validation and sanitization functions. Examples include the OWASP ESAPI library and the built-in validation features of frameworks like Spring and Express.js. ## Encryption and Data Protection - **Using HTTPS for Encrypting Data in Transit**: Always employ HTTPS to encrypt data transferred between clients and servers. This prevents man-in-the-middle attacks and ensures the confidentiality and integrity of the data. - **Encrypting Sensitive Data at Rest**: Use strong encryption algorithms to encrypt sensitive data stored on servers. This ensures the protection of data even if the storage medium is compromised. - **Secure Encryption Key Management**: Implement secure key management practices to store and handle encryption keys safely. Utilize hardware security modules (HSMs) or cloud-based key management services to protect keys. ## Rate Limiting and Throttling - **Applying Rate Limiting to Prevent Abuse and DDoS Attacks**: Apply rate limiting to control the number of requests a client can make within a specific time frame. This helps prevent misuse and mitigates the risk of DDoS attacks. -** Methods for Setting and Enforcing Rate Limits**: Determine rate limits based on your application's usage patterns and capacity. Utilize API gateways or load balancers to enforce these limits and provide appropriate error responses when the limits are exceeded. ## Secure API Design and Development - **Integrating Security from the API Design Phase**: Integrate security considerations right from the beginning of the API design process. Utilize secure coding practices and design patterns to mitigate common vulnerabilities. - **Minimizing Data Exposure**: Limit the data included in API responses to only what is necessary. Use filtering and projection techniques to exclude sensitive information. - **Implementing API Gateways for Enhanced Security**: Use API gateways to add an additional security layer. Gateways can manage authentication, rate limiting, logging, and other security functions, centralizing and simplifying these tasks. ## Monitoring and Logging - **Setting Up Comprehensive Logging and Monitoring to Detect Suspicious Activity**: Establish thorough logging and monitoring systems to track API usage and identify unusual behavior. This includes recording all access attempts, both successful and failed, and monitoring for patterns that may indicate security threats. - **Best Practices for Logging Sensitive Information**: Mask sensitive information in logs to avoid exposure. Refrain from logging sensitive data such as passwords or personal identifiers directly. Instead, use placeholders or hashes where necessary. ## Regular Security Testing - **Conducting Routine Security Audits and Penetration Tests**: Regularly perform security audits and penetration tests to identify and address vulnerabilities. Engage third-party security experts for impartial evaluations. - **Using Automated Tools for Ongoing Security Assessments**: Deploy automated security testing tools to continuously scan for vulnerabilities. Integrate these tools into your development workflow to identify issues early. - **Integrating Security Testing into the CI/CD Pipeline**: Incorporate security testing into your CI/CD pipeline to ensure that security checks are part of the build and deployment processes. This helps maintain a high level of security throughout the development lifecycle. By adhering to these best practices, developers can significantly bolster the security of their APIs, safeguarding their applications and users from potential threats. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxrunvcadukh3uzyogj8.png) #Tools and Technologies for API Security Beyond adopting best practices, utilizing the appropriate tools and technologies can greatly improve your API security. Here's a summary of essential security libraries, frameworks, and tools, along with guidelines for effectively using third-party APIs. ## Security Libraries and Frameworks **OWASP (Open Web Application Security Project)** **Overview**: [OWASP](https://owasp.org/) is a nonprofit organization dedicated to enhancing software security. It provides numerous resources, including guidelines, tools, and libraries. **Popular Libraries**: - **[OWASP ESAPI](https://owasp.org/www-project-enterprise-security-api/) (Enterprise Security API**): This library offers a suite of security controls to defend against common security issues like injection attacks, XSS, and more. - **[OWASP ZAP](https://www.zaproxy.org/) (Zed Attack Proxy)**: A tool designed to identify vulnerabilities in web applications, including APIs. **Spring Security** **Overview**: [Spring Security](https://spring.io/projects/spring-security) is a robust and adaptable authentication and access control framework for Java applications, part of the broader Spring ecosystem. **Features**: - Offers comprehensive security services for Java EE-based enterprise applications. - Supports OAuth, JWT, and various other authentication mechanisms. - Integrates easily with Spring applications for seamless security implementation. **Express Rate Limit** **Overview**: A [straightforward middleware for Express.js](https://www.npmjs.com/package/express-rate-limit) applications to enable rate limiting. **Features**: - Protects against DDoS attacks by limiting repeated requests to public APIs and endpoints. - Easy to configure and integrate with existing Express.js applications. ## API Security Tools **Postman** **Overview**: [Postman](https://www.postman.com/) is a collaborative platform for API development, providing tools for building, testing, and monitoring APIs. **Features**: - **Security Testing**: Enables the creation and execution of security tests within the API development workflow. - **Environment Management**: Manages different environments and configurations, ensuring consistent security testing throughout various stages of development. **Burp Suite** **Overview**: [Burp Suite](https://portswigger.net/burp) is a widely-used tool for web vulnerability scanning, favored by security experts. **Features**: - **Scanner**: Performs automated scans to detect various security vulnerabilities, including those in APIs. - **Proxy**: Intercepts and modifies API requests and responses to identify security weaknesses. - **Extensibility**: Allows for plugins and extensions to enable customized security testing. **OWASP ZAP** **Overview**: [ZAP](https://www.zaproxy.org/ is a popular open-source tool for detecting security vulnerabilities in web applications. **Features**: - **Active and Passive Scanning**: Detects security issues by examining HTTP requests and responses. - **Automation**: Enables automation through scripting and integration with CI/CD pipelines. - **API Testing**: Tailored for API testing and includes multiple plugins to enhance functionality. #Best Practices for Utilizing Third-Party APIs **Assessing the Security of Third-Party APIs** - **Reputation and Track Record**: Select APIs from well-known providers with a solid history of security. Investigate their security track record and look for any previous security issues. - **Documentation and Policies**: Examine the API documentation and security policies. Ensure they adhere to industry standards and best security practices. - **Vulnerability Disclosures**: Confirm that the provider has a clear and transparent process for disclosing and resolving vulnerabilities. **Implementing Additional Security Measures for Third-Party APIs** - **API Gateways**: Deploy API gateways to enhance security. They can enforce security policies, manage authentication, and monitor traffic to and from third-party APIs. - **Rate Limiting and Quotas**: Apply rate limiting to control the number of requests sent to third-party APIs, protecting both your application and the third-party service from overuse. - **Data Encryption**: Ensure that data exchanged with third-party APIs is encrypted both in transit and at rest. Utilize HTTPS and other encryption standards. - **Regular Audits**: Periodically audit the security of the third-party APIs you use. Stay informed about their security advisories and promptly apply patches or updates. - **Fallback Mechanisms**: Design your application to handle potential failures if a third-party API becomes unavailable or compromised. This includes implementing fallback mechanisms and alternative data sources. By leveraging these tools and following these best practices, developers can significantly enhance the security of their APIs, protect sensitive data, and ensure the integrity of their applications. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3c7pp6ly84fgpjhdh0xt.png) #Keeping Up-to-Date with API Security API security is a constantly changing domain, necessitating that developers remain informed and continually enhance their knowledge and skills. Here are ways to stay current with the latest advancements and best practices in API security. ## Staying Informed on Security News and Updates **Sources for Keeping Up-to-Date** **Security Blogs and Websites**: Regularly follow security-focused blogs and websites, such as: - [Krebs on Security](https://krebsonsecurity.com/): A well-known cybersecurity blog by Brian Krebs. - [The Hacker News](https://thehackernews.com/): Provides updates on the latest cybersecurity threats and solutions. - [SecurityWeek](https://www.securityweek.com/): Offers extensive news on security trends and incidents. **Vendor Security Bulletins**: Subscribe to security bulletins from the vendors of your API tools and technologies. These bulletins provide critical updates and patches for known vulnerabilities. **[OWASP](https://owasp.org/news/)**: The Open Web Application Security Project is an essential resource for staying informed about web and API security threats. Regularly visit their website for new guidelines, tools, and reports. **[CVE Database](https://www.cvedetails.com/)**: The Common Vulnerabilities and Exposures (CVE) database is a comprehensive repository of publicly disclosed cybersecurity vulnerabilities. Regularly check for new CVEs that may affect the technologies you use. ## Engaging with the Developer Community **Participating in Forums and Online Communities** - **Stack Overflow**: Engage in discussions on API security, ask questions, and share your expertise. - **Reddit**: Subreddits like [r/netsec](https://www.reddit.com/r/netsec/) are great for staying updated and interacting with other professionals. - **GitHub**: Follow and contribute to repositories related to security tools and projects. **Attending Conferences and Meetups** - **Security Conferences**: Attend major industry events such as [Black Hat](https://www.blackhat.com/), [DEF CON](https://defcon.org/, and the [RSA Conference](https://www.rsaconference.com/) to learn from experts and network with peers. - **API-Specific Events**: Participate in API-focused events like [API World](https://apiworld.co/) and [Nordic APIs](https://nordicapis.com/) to stay current on best practices and emerging trends. **Becoming Part of Security-Focused Communities** - **Meetup Groups**: Join local or online meetups that concentrate on cybersecurity or API development. These groups frequently organize talks, workshops, and networking events. - **Professional Associations**: Consider becoming a member of associations such as the Information Systems Security Association (ISSA) or the International Association of Computer Science and Information Technology (IACSIT) to gain access to valuable resources and professional growth opportunities. ## Continuous Learning and Improvement **Highlighting the Need for Ongoing Education and Training** - **Online Courses and Certifications**: Enroll in online courses and pursue certifications to enhance your knowledge of API security. Platforms such as [Coursera](https://www.coursera.org/courseraplus/?utm_medium=sem&utm_source=gg&utm_campaign=B2C_EMEA__coursera_FTCOF_courseraplus&campaignid=20858197888&adgroupid=156245795749&device=c&keyword=coursera&matchtype=e&network=g&devicemodel=&adposition=&creativeid=692160334961&hide_mobile_promo&term=%7Bterm%7D&gad_source=1&gclid=Cj0KCQjw7ZO0BhDYARIsAFttkChD8MBQjaK0vY-geReVmFlTHXRLfaLZaMDNzMsec5337nrRNgjXlVIaAt3gEALw_wcB), [Udemy](https://www.udemy.com/), and [Pluralsight](https://www.pluralsight.com/) offer courses on cybersecurity and secure coding practices. Certifications like Certified Information Systems Security Professional ([CISSP](https://www.isc2.org/certifications/cissp)) and Certified Secure Software Lifecycle Professional ([CSSLP](https://www.isc2.org/certifications/csslp)) are valuable credentials. - **Webinars and Workshops**: Attend webinars and workshops conducted by security experts and organizations. These sessions provide practical insights and hands-on experience with the latest tools and techniques. - **Books and Publications**: Read books on API security and related subjects. Noteworthy titles include "[API Security in Action](https://www.amazon.com/API-Security-Action-Neil-Madden/dp/1617296023)" by Neil Madden. **Practicing Security-Focused Development** - **Hackathons and Capture the Flag (CTF) Competitions**: Participate in hackathons and CTF competitions to sharpen your security skills in a practical, hands-on setting. These events mimic real-world security challenges and foster creative problem-solving. - **Code Reviews and Peer Learning**: Conduct regular code reviews with an emphasis on security. Promote a culture of continuous learning within your team by sharing knowledge and experiences related to API security. **Staying Proactive** - **Regular Self-Evaluation**: Periodically evaluate your knowledge and skills in API security. Identify areas that need improvement and seek out resources to fill these gaps. - **Adopting a Security-First Approach**: Foster a mindset where security is a top priority throughout the development lifecycle. Remain vigilant and proactive in addressing potential security issues. By implementing these strategies, developers can stay ahead of emerging threats, continually enhance their skills, and contribute to creating secure, resilient APIs. #Conclusion ## Summary of Key Points In this blog post, we've discussed the critical best practices for API security that every developer should adopt to protect their applications. Here’s a concise recap of the key points covered: **Understanding API Security Threats**: Identify common vulnerabilities such as injection attacks, broken authentication, cross-site scripting (XSS), and insecure direct object references (IDOR). Learn from real-world case studies to understand the impact of API security breaches and the lessons they provide. **Best Practices for API Security**: - **Authentication and Authorization**: Use robust authentication mechanisms like OAuth, JWT, and API keys. Implement role-based access control (RBAC) and the principle of least privilege. - **Input Validation and Sanitization**: Validate and sanitize inputs to prevent injection attacks, leveraging libraries and frameworks to assist in this process. -** Encryption and Data Protection**: Use HTTPS to encrypt data in transit, encrypt sensitive data at rest, and securely manage encryption keys. - **Rate Limiting and Throttling**: Apply rate limiting to prevent abuse and DDoS attacks, and enforce these limits effectively. - **Secure API Design and Development**: Design APIs with security in mind, minimize unnecessary data exposure, and use API gateways for added security layers. - **Monitoring and Logging**: Implement comprehensive logging and monitoring to detect suspicious activity, and follow best practices for logging sensitive information. - **Regular Security Testing**: Perform regular security audits and penetration testing, use automated tools for continuous security assessment, and integrate security testing into the CI/CD pipeline. - **Tools and Technologies for API Security**: Utilize security libraries and frameworks like OWASP and Spring Security, employ API security tools like Postman and Burp Suite, and adhere to best practices when using third-party APIs. - **Staying Updated on API Security**: Keep informed about the latest security threats through reliable sources, engage with the developer community via forums and conferences, and commit to continuous learning and improvement. By following these strategies, developers can stay ahead of emerging threats, continuously improve their skills, and contribute to creating secure, robust APIs. Securing your APIs is an ongoing commitment rather than a one-time effort. We urge all developers to adopt the best practices discussed in this post to safeguard their applications and users from potential threats. By maintaining a security-first mindset and staying proactive, you can significantly reduce the risk of security breaches and ensure the integrity of your APIs. At API4AI, with our extensive experience in API development, we are dedicated to helping developers create secure and efficient APIs. We encourage you to try our [APIs for image processing](https://api4.ai/apis) and assess their security. Your feedback and insights are crucial in helping us uphold the highest security standards. Together, we can build a safer and more secure digital environment. Start implementing these best practices today and continuously enhance your security measures as the threat landscape evolves. [More Stories about Web, Cloud, AI and APIs](https://api4.ai/blog)
taranamurtuzova
1,922,336
OTA updates in React Native
Hi there, it's been a long time since I have written a blog on React Native. Today I just want to...
0
2024-07-14T14:50:50
https://dev.to/ponikar/ota-updates-in-react-native-1pbo
javascript, reactnative, ux, mobile
Hi there, it's been a long time since I have written a blog on React Native. Today I just want to explain what OTA updates are and how you can use them in your React Native project. We will deep dive into OTA updates, advantages, and challenges you may face while dealing with this. ### Disclaimer This blog covers some advanced topics from my experience building various apps. If you're new to React Native, some parts might be hard to understand, and that's okay. Everyone starts somewhere. Feel free to ask questions or share your thoughts in the comments section. The idea of this blog is to give you a high-level overview of how OTA updates work in the real world. Instead of covering obvious topics like installation and troubleshooting, I'll focus on helping you understand the real impact of OTA updates when working with a production app. ## What is OTA update? OTA stands for Over the Air updates, it's an idea to update your JavaScript bundle on the fly without the need to update the binary (native Android/iOS update). ![OTA update flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/04j8zjv0esbe7zux1y6y.png) Basically, if you have OTA updates configured and you push a new bundle, existing users can download and load the new JS bundle on the next boot-up. OTA updates are really helpful to fix UI-related bugs on the fly and sometimes it helps avoid submitting your apps for review which can take up to 7 days. > There is also one popular method known as Server Driven UI that lets you control the behavior of the UI from the server. OTAs and Server Driven UI are two separate things. I might write a blog on it in the future but this blog is dedicated to OTA updates. You can share your thoughts in the comment section. So now we have understood what OTA updates are in general. Let's understand how it works in React Native. In very simple words, OTAs are just for updating your JS bundle on the fly. Roughly speaking, it's just an API call to your server and asking if there has been any update pushed lately or not. The server might give you a JS bundle that you can swap out with the current bundle and load the app, but as you can see things start to get complicated as we speak about swapping the JS bundle and making a server call and loading the JS bundle in the next boot-up. To make it easier, there are libraries like [`react-native-codepush`](https://github.com/microsoft/react-native-code-push) and [`eas-updates`](https://docs.expo.dev/eas-update/introduction/) by [Expo](https://docs.expo.dev/) that simplify this process. These libraries act as a wrapper around your app and they also provide a CodePush server in which you can configure and push your JS bundle. You can refer to their documentation to install this library. If you are facing any issues, you can share them in the comment section. ## Too Easy to break UX Remember CodePush only updates your JavaScript code. It cannot update your native Java/Objective C code. So it's really easy to break things by accidentally pushing a JS bundle that can break your app if the native modules of that bundle are missing. Let's take an example, suppose you have a music app live in the app store and people are loving it. They are using it all day but now you have a product requirement that requires you to access the user's microphone. You decided to install a library that lets you access the microphone. This library also requires you to configure native files such as `MainApplication.java` or `Appdelegate.m`. ![JS code invokes native function](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ku1p8q7scpm4snz3wz3p.png) You made changes, everything is working locally as expected. You decided to release this update via CodePush. The moment you release the OTA update the app will crash because your JS code is trying to call a native module which is missing. Users have to reinstall the application and you have to immediately revert that bundle so other users won't experience these crashes. ![Picture shows explosion](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m039aofaissz165m2noc.gif) We had the exact same situation while building Nintee Application. We had to revert the JS bundle and ask users to reinstall the app. It's easy to spoil the UX. Another challenging part is OTA updates don't reflect immediately. First, the user needs to download the bundle in the background and load it which can happen in the next app boot. So if users install this app, chances are they can load the stale JS bundle which is locally present. UX can easily get out of sync. ## Ground rules You have to establish some ground rules when you are working with OTA updates. Stick to the app version: I made a habit to increment the app version whenever I modified any native code. You can easily differentiate and upload the JS bundle corresponding to the app version. ![OTA updates for different versions of your app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fb403pxus1amuozpkyqx.png) For example, if Segment A is using version `1.5.2` you can push OTA updates specifically for this segment. This way you can make sure that OTA updates support that App binary and it won't break the app. Decide whether you want updates to be visible immediately. Prioritize between OTA and Native binary release. You can also ship both updates at the same time. This makes sure if new users are onboarding, they will see the new changes meanwhile existing users can either load JS on the fly or update the binary. ## What's my take on OTA updates? These are the technical decisions to make sure the user's UX stays consistent. It depends on your team, and the size of your user base to decide whether you should configure OTA in your app or not. While working at Nintee, we realized that almost 75% of our users don't bother to update the app. They would likely use the old version of the app if there are no major updates. We had a situation where we had to ask users to update the app. This process became cumbersome as we were onboarding more and more users. OTA updates help us to push bug fixes, UI improvements, and functionalities on the fly. It was almost like how we update our web applications. Since the app distribution platform like Play Store and App Store can take up to 24 hours (sometimes) when an app only contains some bug fixes and minor changes, it is a really frustrating experience for a developer. You just have to be more careful while working with OTA updates as we saw it's too easy to spoil the User Experience. I hope this blog helped you to understand the high-level overview of how OTA updates work at the production level. Thank you for reading this blog and I shall come with another exciting blog. Please share your thoughts in the comment section. Curious to know more? Check out reference links [React Native Code Push library](https://github.com/microsoft/react-native-code-push) [Expo eas updates](https://docs.expo.dev/eas-update/introduction/) [Manage runtime version with eas updates](https://docs.expo.dev/eas-update/runtime-versions/) [Microsoft App Center retiring](https://github.com/microsoft/react-native-code-push/issues/2675) Happy Hacking...
ponikar
1,922,337
Sets - HashSet, LinkedHashSet, and TreeSet.
You can create a set using one of its three concrete classes: HashSet, LinkedHashSet, or TreeSet. The...
0
2024-07-13T14:14:05
https://dev.to/paulike/sets-hashset-linkedhashset-and-treeset-32mc
java, programming, learning, beginners
You can create a set using one of its three concrete classes: **HashSet**, **LinkedHashSet**, or **TreeSet**. The **Set** interface extends the **Collection** interface, as shown in Figure below. It does not introduce new methods or constants, but it stipulates that an instance of **Set** contains no duplicate elements. The concrete classes that implement **Set** must ensure that no duplicate elements can be added to the set. That is, no two elements **e1** and **e2** can be in the set such that **e1.equals(e2)** is **true**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0tgqm8ob656x9dlxgdn5.png) The **AbstractSet** class extends **AbstractCollection** and partially implements **Set**. The **AbstractSet** class provides concrete implementations for the **equals** method and the **hashCode** method. The hash code of a set is the sum of the hash codes of all the elements in the set. Since the **size** method and **iterator** method are not implemented in the **AbstractSet** class, **AbstractSet** is an abstract class. Three concrete classes of **Set** are **HashSet**, **LinkedHashSet**, and **TreeSet**, as shown in Figure below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5h7zq7mjdxqqmwzmout3.png) ## HashSet The **HashSet** class is a concrete class that implements **Set**. You can create an empty _hash set_ using its no-arg constructor or create a hash set from an existing collection. By default, the initial capacity is **16** and the load factor is **0.75**. If you know the size of your set, you can specify the initial capacity and load factor in the constructor. Otherwise, use the default setting. The load factor is a value between **0.0** and **1.0**. The _load factor_ measures how full the set is allowed to be before its capacity is increased. When the number of elements exceeds the product of the capacity and load factor, the capacity is automatically doubled. For example, if the capacity is **16** and load factor is **0.75**, the capacity will be doubled to **32** when the size reaches **12** (16*0.75 = 12). A higher load factor decreases the space costs but increases the search time. Generally, the default load factor **0.75** is a good tradeoff between time and space costs. A **HashSet** can be used to store _duplicate-free_ elements. For efficiency, objects added to a hash set need to implement the **hashCode** method in a manner that properly disperses the hash code. Recall that **hashCode** is defined in the **Object** class. The hash codes of two objects must be the same if the two objects are equal. Two unequal objects may have the same hash code, but you should implement the **hashCode** method to avoid too many such cases. Most of the classes in the Java API implement the **hashCode** method. For example, the **hashCode** in the **Integer** class returns its **int** value. The **hashCode** in the **Character** class returns the Unicode of the character. The **hashCode** in the **String** class returns s0 *31^(n - 1) + s1 *31^(n - 2) + c + sn - 1, where si is s.charAt(i). The code below gives a program that creates a hash set to store strings and uses a foreach loop to traverse the elements in the set. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pp0l0remts22okl05cd6.png) The strings are added to the set (lines 11–16). **New York** is added to the set more than once, but only one string is stored, because a set does not allow duplicates. As shown in the output, the strings are not stored in the order in which they are inserted into the set. There is no particular order for the elements in a hash set. To impose an order on them, you need to use the **LinkedHashSet** class. Recall that the **Collection** interface extends the **Iterable** interface, so the elements in a set are iterable. A foreach loop is used to traverse all the elements in the set (lines 21–23). Since a set is an instance of **Collection**, all methods defined in **Collection** can be used for sets. The code below gives an example that applies the methods in the **Collection** interface on sets. ``` package demo; public class TestMethodsInCollection { public static void main(String[] args) { // Create set1 java.util.Set<String> set1 = new java.util.HashSet<>(); // Add strings to set1 set1.add("London"); set1.add("Paris"); set1.add("New York"); set1.add("San Francisco"); set1.add("Beijing"); System.out.println("set1 is " + set1); System.out.println(set1.size() + " elements in set1"); // Delete a string from set1 set1.remove("London"); System.out.println("\nset1 is " + set1); System.out.println(set1.size() + " elements in set1"); // Create set2 java.util.Set<String> set2 = new java.util.HashSet<>(); // Add strings to set2 set2.add("London"); set2.add("Shanghai"); set2.add("Paris"); System.out.println("\nset2 is " + set2); System.out.println(set2.size() + " elements in set2"); System.out.println("\nIs Taipei in set2? " + set2.contains("Taipei")); set1.addAll(set2); System.out.println("\nAfter adding set2 to set1, set1 is " + set1); set1.removeAll(set2); System.out.println("After removing set2 from set1, set1 is " + set1); set1.retainAll(set2); System.out.println("After removing common elements in set2 from set1, set1 is " + set1); } } ``` `set1 is [San Francisco, New York, Paris, Beijing, London] 5 elements in set1 set1 is [San Francisco, New York, Paris, Beijing] 4 elements in set1 set2 is [Shanghai, Paris, London] 3 elements in set2 Is Taipei in set2? false After adding set2 to set1, set1 is [San Francisco, New York, Shanghai, Paris, Beijing, London] After removing set2 from set1, set1 is [San Francisco, New York, Beijing] After removing common elements in set2 from set1, set1 is []` The program creates two sets (lines 7, 25). The **size()** method returns the number of the elements in a set (line 17). Line 20 `set1.remove("London");` removes **London** from **set1**. The **contains** method (line 34) checks whether an element is in the set. Line 36 `set1.addAll(set2);` adds **set2** to **set1**. Therefore, **set1** becomes **[San Francisco, New York, Shanghai, Paris, Beijing, London]**. Line 39 `set1.removeAll(set2);` removes **set2** from **set1**. Thus, **set1** becomes **[San Francisco, New York, Beijing]**. Line 42 `set1.retainAll(set2);` retains the common elements in **set1**. Since **set1** and **set2** have no common elements, **set1** becomes empty. ## LinkedHashSet **LinkedHashSet** extends **HashSet** with a linked-list implementation that supports an ordering of the elements in the set. The elements in a **HashSet** are not ordered, but the elements in a **LinkedHashSet** can be retrieved in the order in which they were inserted into the set. A **LinkedHashSet** can be created by using one of its four constructors, as shown in Figure above. These constructors are similar to the constructors for **HashSet**. The code below gives a test program for **LinkedHashSet**. The program simply replaces **HashSet** by **LinkedHashSet** in code above TestHashSet.java. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/258g566vu4thxb2aj5ec.png) A **LinkedHashSet** is created in line 8. As shown in the output, the strings are stored in the order in which they are inserted. Since **LinkedHashSet** is a set, it does not store duplicate elements. The **LinkedHashSet** maintains the order in which the elements are inserted. To impose a different order (e.g., increasing or decreasing order), you can use the **TreeSet** class. If you don’t need to maintain the order in which the elements are inserted, use **HashSet**, which is more efficient than **LinkedHashSet**. ## TreeSet **SortedSet** is a subinterface of **Set**, which guarantees that the elements in the set are sorted. Additionally, it provides the methods **first()** and **last()** for returning the first and last elements in the set, and **headSet(toElement)** and **tailSet(fromElement)** for returning a portion of the set whose elements are less than **toElement** and greater than or equal to **fromElement**, respectively. **NavigableSet** extends **SortedSet** to provide navigation methods **lower(e)**, **floor(e)**, **ceiling(e)**, and **higher(e)** that return elements respectively less than, less than or equal, greater than or equal, and greater than a given element and return **null** if there is no such element. The **pollFirst()** and **pollLast()** methods remove and return the first and last element in the tree set, respectively. **TreeSet** implements the **SortedSet** interface. To create a **TreeSet**, use a constructor, as shown in Figure above. You can add objects into a _tree set_ as long as they can be compared with each other. As discussed in [Section](https://dev.to/paulike/the-comparator-interface-1ebc), the elements can be compared in two ways: using the **Comparable** interface or the **Comparator** interface. The code below gives an example of ordering elements using the **Comparable** interface. The preceding example in code above, TestLinkedHashSet.java displays all the strings in their insertion order. This example rewrites the preceding example to display the strings in alphabetical order using the **TreeSet** class. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amjgvx870q1vd3w3bdfu.png) `Sorted tree set: [Beijing, London, New York, Paris, San Francisco] first(): Beijing last(): San Francisco headSet("New York"): [Beijing, London] tailSet("New York"): [New York, Paris, San Francisco] lower("P"): New York higher("P"): Paris floor("P"): New York ceiling("P"): Paris pollFirst(): Beijing pollLast(): San Francisco New tree set: [London, New York, Paris]` The example creates a hash set filled with strings, then creates a tree set for the same strings. The strings are sorted in the tree set using the **compareTo** method in the **Comparable** interface. The elements in the set are sorted once you create a **TreeSet** object from a **HashSet** object using **new TreeSet<String>(set)** (line 18). You may rewrite the program to create an instance of **TreeSet** using its no-arg constructor, and add the strings into the **TreeSet** object. **treeSet.first()** returns the first element in **treeSet** (line 22), and **treeSet.last()** returns the last element in treeSet (line 23). **treeSet.headSet("New York")** returns the elements in **treeSet** before New York (lines 24). **treeSet.tailSet("New York")** returns the elements in **treeSet** after New York, including New York (lines 25). **treeSet.lower("P")** returns the largest element less than **P** in **treeSet** (line 28). **treeSet.higher("P")** returns the smallest element greater than **P** in **treeSet** (line 29). **treeSet.floor("P")** returns the largest element less than or equal to **P** in **treeSet** (line 30). **treeSet.ceiling("P")** returns the smallest element greater than or equal to **P** in **treeSet** (line 31). **treeSet.pollFirst()** removes the first element in **treeSet** and returns the removed element (line 32). **treeSet.pollLast()** removes the last element in **treeSet** and returns the removed element (line 33). All the concrete classes in Java Collections Framework have at least two constructors. One is the no-arg constructor that constructs an empty collection. The other constructs instances from a collection. Thus the **TreeSet** class has the constructor **TreeSet(Collection c)** for constructing a **TreeSet** from a collection **c**. In this example, **new TreeSet<>(set)** creates an instance of **TreeSet** from the collection **set**. If you don’t need to maintain a sorted set when updating a set, you should use a hash set, because it takes less time to insert and remove elements in a hash set. When you need a sorted set, you can create a tree set from the hash set. If you create a **TreeSet** using its no-arg constructor, the **compareTo** method is used to compare the elements in the set, assuming that the class of the elements implements the **Comparable** interface. To use a comparator, you have to use the constructor **TreeSet(Comparator comparator)** to create a sorted set that uses the **compare** method in the comparator to order the elements in the set. The code below gives a program that demonstrates how to sort elements in a tree set using the **Comparator** interface. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mclhi0uehf4jsvs7htgh.png) The **GeometricObjectComparator** class is defined in [GeometricObjectComparator.java](https://dev.to/paulike/the-comparator-interface-1ebc). The program creates a tree set of geometric objects using the **GeometricObjectComparator** for comparing the elements in the set (lines 8). The **Circle** and **Rectangle** classes were defined in [Abstract Classes](https://dev.to/paulike/abstract-classes-2ee5). They are all subclasses of **GeometricObject**. They are added to the set (lines 9–12). Two circles of the same radius are added to the tree set (lines 10–11), but only one is stored, because the two circles are equal and the set does not allow duplicates.
paulike
1,922,338
why should we use jupyter notebook ?
why should we use jupyter notebook?
0
2024-07-13T14:22:28
https://dev.to/rohan_ghimire_1228ad5ac6a/why-should-we-use-jupyter-notebook--5fca
why should we use jupyter notebook?
rohan_ghimire_1228ad5ac6a
1,922,339
motivational books worldwide: "The 7 Habits of Highly Effective "Awaken the Giant Within" read more......
A post by MD Antor
0
2024-07-13T14:25:08
https://dev.to/cabit18/motivational-books-worldwidethe-7-habits-of-highly-effectiveawaken-the-giant-within-read-more-24lf
[](https://sites.google.com/view/best-motivational-books-pdf/home)**[](https://sites.google.com/view/best-motivational-books-pdf/home)**
cabit18
1,922,340
Comparing the Performance of Sets and Lists
Sets are more efficient than lists for storing nonduplicate elements. Lists are useful for accessing...
0
2024-07-13T14:27:07
https://dev.to/paulike/comparing-the-performance-of-sets-and-lists-3di7
java, programming, learning, beginners
Sets are more efficient than lists for storing nonduplicate elements. Lists are useful for accessing elements through the index. The elements in a list can be accessed through the index. However, sets do not support indexing, because the elements in a set are unordered. To traverse all elements in a set, use a foreach loop. We now conduct an interesting experiment to test the performance of sets and lists. The code below gives a program that shows the execution time of (1) testing whether an element is in a hash set, linked hash set, tree set, array list, and linked list, and (2) removing elements from a hash set, linked hash set, tree set, array list, and linked list. ``` package demo; import java.util.*; public class SetListPerformanceTest { static final int N = 50000; public static void main(String[] args) { // Add numbers 0, 1, 2, ..., N - 1 to the array list List<Integer> list = new ArrayList<>(); for(int i = 0; i < N; i++) list.add(i); Collections.shuffle(list); // Shuffle the array list // Create a hash set, and test its performance Collection<Integer> set1 = new HashSet<>(list); System.out.println("Member test time for hash set is " + getTestTime(set1) + " milliseceonds"); System.out.println("Remove element time for hash set is " + getRemoveTime(set1) + " milliseceonds"); // Create a linked hash set, and test its performance Collection<Integer> set2 = new LinkedHashSet<>(list); System.out.println("Member test time for linked hash set is " + getTestTime(set2) + " milliseceonds"); System.out.println("Remove element time for linked hash set is " + getRemoveTime(set2) + " milliseceonds"); // Create a tree set, and test its performance Collection<Integer> set3 = new TreeSet<>(list); System.out.println("Member test time for tree set is " + getTestTime(set3) + " milliseceonds"); System.out.println("Remove element time for tree set is " + getRemoveTime(set3) + " milliseceonds"); // Create an array list, and test its performance Collection<Integer> list1 = new ArrayList<>(list); System.out.println("Member test time for array list is " + getTestTime(list1) + " milliseceonds"); System.out.println("Remove element time for array list is " + getRemoveTime(list1) + " milliseceonds"); // Create an linked list, and test its performance Collection<Integer> list2 = new LinkedList<>(list); System.out.println("Member test time for linked list is " + getTestTime(list2) + " milliseceonds"); System.out.println("Remove element time for linked list is " + getRemoveTime(list2) + " milliseceonds"); } public static long getTestTime(Collection<Integer> c) { long startTime = System.currentTimeMillis(); // Test if a number is in the collection for(int i = 0; i < N; i++) c.contains((int)(Math.random() * 2 * N)); return System.currentTimeMillis() - startTime; } public static long getRemoveTime(Collection<Integer> c) { long startTime = System.currentTimeMillis(); for(int i = 0; i < N; i++) c.remove(i); return System.currentTimeMillis() - startTime; } } ``` `Member test time for hash set is 31 milliseceonds Remove element time for hash set is 25 milliseceonds Member test time for linked hash set is 21 milliseceonds Remove element time for linked hash set is 47 milliseceonds Member test time for tree set is 33 milliseceonds Remove element time for tree set is 52 milliseceonds Member test time for array list is 3509 milliseceonds Remove element time for array list is 1796 milliseceonds Member test time for linked list is 7833 milliseceonds Remove element time for linked list is 3867 milliseceonds` The program creates a list for numbers from **0** to **N-1** (for **N** = **50000**) (lines 9–11) and shuffles the list (line 12). From this list, the program creates a hash set (line 15), a linked hash set (line 20), a tree set (line 25), an array list (line 30), and a linked list (line 35). The program obtains the execution time for testing whether a number is in the hash set (line 16), linked hash set (line 21), tree set (line 26), array list (line 31), and linked list (line 36), and obtains the execution time for removing the elements from the hash set (line 17), linked hash set (line 22), tree set (line 27), array list (line 32), and linked list (line 37). The **getTestTime** method invokes the **contains** method to test whether a number is in the container (line 45) and the **getRemoveTime** method invokes the **remove** method to remove an element from the container (line 54). As these runtimes illustrate, sets are much more efficient than lists for testing whether an element is in a set or a list. Therefore, the No-Fly list should be implemented using a set instead of a list, because it is much faster to test whether an element is in a set than in a list.
paulike
1,922,341
Enhancing Your Communication Skills for Success.
“Listen with curiosity. Speak with honesty. Act with integrity. The greatest problem with...
0
2024-07-13T14:29:13
https://dev.to/niharika_malik_4bef9c4a33/enhancing-your-communication-skills-for-success-32e2
“Listen with curiosity. Speak with honesty. Act with integrity. The greatest problem with communication is we don’t listen to understand. We listen to reply. When we listen with curiosity, we don’t listen with the intent to reply. We listen for what’s behind the words.” – Roy T. Bennett, author ## What are the importance of communications skills: In our daily life communications skills is importance to expressing our thoughts, and needs to connect others. Communication plays a vital part in building up a strong relationship across the world, either in organizational structure or outside of it. Now a days to become a good communicators is very importance. When you go outside of the country or states, that time you will feel how much importance our communication skills .And If you want to achieve your goals then you must give your efforts which is help you to achieve your dream. When you want to make a friends you need communications skills to make a friend and you must know about How to make a friends? Which way to communicate with stranger? How to start a good conversation ?Now a days nobody cares about you or think about you if you don't have good communications skills. Everyone wants to be a communicators but they are not choosing right way which is affect our hole life, But don't worry about it i will help you to improve your communications and some tips ,Let's discuss about the key points on our articles....... ## How to start to speak . step-1: Reading Books: Honestly first ask your self why should i reading books? How it helps me ? Here i am telling you how many benifits of reading book. ## Increases Intelligence : "The more that you read, the more things you will know. The more that you learn, the more places you'll go. "Exposure to vocabulary through reading not only leads to higher scores on reading tests but also higher scores on general intelligence tests for children. ## Boosts Brainpower : Not only does regular reading help make you smarter, but it can also actually increase your brainpower. It helps you to solve complex problem solution and it hepls you to think logical way , It is improve your thinking capacity and so many. Reading regularly improves memory function by giving your brain a good workout. ## Helps you better understand the content : When it comes to actually remembering what you're reading, you're better off going with a printed book than an e-book. It also give you to better understanding. Let's take example just compare two students who are reading books in their daily basic and another one who are never want's to reading books when you ask a question then you how much difference between them . First students they can give you deeper answer because they have ability to solve a problem , They never lots their confidence in front of other people. ## Improves sleep: Reading a physical book before bed helps you relax more than zoning out in front of a screen. Screens like readers and tablets can keep you awake longer and even impair sleep. Sleeping is very importance for a good health if your not sleep well then how can you get a good health ,if your health is not good then how can achieve dreams. ## step-2 ## Active Listening Skills To Practice (With Examples). ## Listening Skills: Well , ask yours self why are listening skills important ?How to practice active listening skills? Listening skills are skills that contribute to your ability to accurately receive information when communicating with others. These skills are an important part of effective communication in workplace. Developing good listening habits can help to ensure you understand the information correctly. ## step-3 The Importance of Clear and Robust Communication in Your Professional Career. ## Practice skills: “You can’t skip the practice and rely solely on talent. Practice is the bridge between your abilities and success.” Practice changes the human body physically and psychologically as it increases in skill level. Skills that are learned through deliberate practice are specific and time spent practicing is crucial for the individual. If an individual spent a short amount of time with high intensity during practice, they are not as likely to succeed as an individual with a long-term commitment to the practice and skill. , it helps us to improve our talking style and it helps us fluent speaker .If you are not doing regular practice then it will not help you to improve your communication skills, When you start practice with someone then you know to communicate with others, No matters what people think about you if you really want to become a fluent speaker then you must start practice from now .Never ever thinks other people just think about you how to become a fluent and how to become confidence .I know your facing lots of problem to speak something in fluent but it is a right time for you .If you think you can ,that's all just do and go head ,One things someone says that "practice makes perfect" just follow this line. conclusion: Developing these skills can lead to improved collaboration, conflict resolution, and overall success in various aspects of life. All the best!.
niharika_malik_4bef9c4a33
1,922,343
The 3 Crucial Instruction Types Every Technical Writer Needs to Know.
INSTRUCTIONS: THE HEART OF TECHNICAL WRITING. As you progress on your journey as a...
0
2024-07-13T17:03:34
https://dev.to/spiff/master-3-essential-instruction-types-for-technical-writing-success-1chj
webdev, writing, contentwriting, devrel
##INSTRUCTIONS: THE HEART OF TECHNICAL WRITING. ![Image of an instruction description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mnxxpw421aexy276tje.png) As you progress on your journey as a technical writer, the writing structures you need to learn might overwhelm you. It may seem that technical writing is all about complex frameworks and following rigid templates. ![Image of an overwhelmed writer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rhzmdg3h7lbbzln9ftxy.jpeg) However, if you take a look at some technical writing examples, you will notice that almost all of them consist of instructions and explanations on how to do things. Focus on mastering the act of writing instructions, and the rest will come easily. In technical writing, instruction comes in three common forms: **Frequently Asked Questions (FAQ)** **Step-by-Step Instructions** **Troubleshooting Guides** Mastering the art of writing clear and effective instructions is crucial for technical writers. Here's a breakdown of the three types of instructions every technical writer should master: **Frequently Asked Questions (FAQs):** - FAQs are structured to address common queries users might have about a product, service, or process. They are typically presented in a question-and-answer format to provide quick solutions to common issues or inquiries. ![Image of a dark themed background with question marks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qbppunih97mc6hmpda7.jpeg) **Step-by-Step Instructions:** - Step-by-step instructions break down complex processes into sequential, manageable steps. These are essential for guiding users through tasks, procedures, or setups in a systematic way, ensuring clarity and ease of execution. ![Image of a man climbing a staircase](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjm1hutjx70as67ozntw.jpeg) **Troubleshooting Guides:** - Troubleshooting guides help users diagnose and resolve problems they encounter. They typically list common issues, symptoms, possible causes, and corresponding solutions or workarounds, enabling users to troubleshoot effectively. ![Image of troubleshooting logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cmaytdkgyvy5rw2lhj0n.png) Learning these three forms of instructional writing empowers technical writers to create various types of technical documentation, including user manuals, instruction manuals, setup guides, release notes, workflows, customer service scripts, and so much more. Each form serves a specific purpose in providing users with clear, concise, and actionable information. Ready to become better at technical writing? By perfecting the art of clear, concise instructions, you'll unlock the door to creating everything from user manuals to troubleshooting guides. ![Image description of a man opening a dooe](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gxupx2ph7l5785jdt99m.jpeg) It might seem overwhelming at first, but remember, even the most complex technical documents boil down to simple, easy-to-follow steps. Your journey to technical writing excellence starts here, and it's going to be a fun ride. Ready, set, write. I'd love to help you take the next step, **it's the most important skill you need to apply for technical writing jobs**. Would you like to know how to write **clear**, **concise**, and **user-centric** instructions? Let me know in the comment section. ![Image of comment section logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6u20u6voaf3yl54418op.png)
spiff
1,922,344
Building a Modular Decoupled Backend using a Monorepo
There are a lot of articles about why you should decouple your backend to move faster, but very few...
0
2024-07-16T11:55:36
https://dev.to/woovi/building-a-modular-decoupled-backend-using-a-monorepo-2fik
monorepo, decoupled, koa
There are a lot of articles about why you should decouple your backend to move faster, but very few content on how to practically do it. This article is a practical approach that we use at Woovi to decouple our backend endpoints breaking them into packages in a monorepo. As we scale we can easily convert them to their own microservices. ## Creating a decoupled backend ![Decoupled](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ed21phwe2timyvr3zmfu.png) We should first start designing what endpoint patterns we are going to expose. In the image above we have five patterns: `/login`, `/register`, `/home`, `/api`, and `/provider`. You can think about each of these patterns as separate services, but we will deploy them in a single service to simplify deployment and maintenance. For each endpoint pattern, we are going to create a package in our monorepo, and each package will have its own Koa Router. The monorepo for these 5 "services" will be like this: ``` packages - login - register - home - api - provider ``` A sample koa router code is described below: ```ts import Router from '@koa/router'; export const getApiRouter = () => { const apiRouter = new Router(); apiRouter.all('/api/status', statusMiddleware); apiRouter.get('/api/v1/charge', chargeGet); apiRouter.post('/api/v1/charge', chargePost); apiRouter.get('/api/v1/transaction', transactionGet); apiRouter.post('/api/v1/transaction', transactionPost); return apiRouter; } ``` Only using a Koa router is not enough to expose these endpoints. We need a Koa app to make this possible. We will create a new package to combine all these "services" endpoints and expose them. ``` packages ... - woovi-server ``` It will need an `app.ts` file that has the Koa app with some middleware and will combine all these "services" endpoints: ```ts const app = new Koa({ proxy: true, // we are behind a proxy keys: [config.JWT_KEY], }); //Add your middleware here app.use() // combine all routes here const apiRouter = getApiRouter(); const homeRouter = getHomeRouter(); app.use(apiRouter.routes()).use(apiRouter.allowedMethods()); app.use(homeRouter.routes()).use(homeRouter.allowedMethods()); ``` Then you can run the server ```ts (async () => { const server = createServer(app.callback()); server.listen(config.SERVER_PORT) })(); ``` This way each "service"/package keeps its code separate, following the domain-driven design. ## Breaking in microservices As you scale, you can decide to move to microservices. Instead of using a single server to serve all your endpoints, you can have many servers. If you follow the approach above, it is as simple as creating a new Koa app for each package that you want to decouple in a new service. ## In Conclusion This approach lets you decouple your backend, making it clear the boundary of each separate "service" without all the burden of a full microservices approach. You get the benefits without the drawbacks. You can read more about DDD here [Domain Driven Design using a Monorepo](https://dev.to/woovi/domain-driven-design-using-a-monorepo-afg) --- Woovi [Woovi](https://www.woovi.com) is a Startup that enables shoppers to pay as they like. Woovi provides instant payment solutions for merchants to accept orders to make this possible. If you want to work with us, we are [hiring](https://woovi.com/jobs/)!
sibelius
1,922,345
Running a Website Speed Test
How to Ensure Your Site is Running Smoothly We’ve all experienced the frustration of a...
0
2024-07-14T06:29:00
https://travislord.xyz/articles/running-a-website-speed-test
webdev, beginners, tutorial, learning
### How to Ensure Your Site is Running Smoothly We’ve all experienced the frustration of a slow-loading web page. Sometimes the issue lies with your internet connection, but other times, the problem might be the website itself. If the website in question is your own, this can be particularly worrying. You might fear that visitors are encountering the same slow speeds or, worse, that they can’t access your site at all. To alleviate these concerns, you can bookmark a few URLs that will help you conduct a speed test of your site. These tools will provide valuable information to help you diagnose your website's response time. ### Tools for Website Speed Test 1. **WebSitePulse** For a comprehensive **speed test site**, try [WebSitePulse](https://www.websitepulse.com/tools/website-test). This tool allows you to test your website's response time from three different regions: the US, Europe, and Australia. Since your website server is likely hosted in one of these regions, the response time from that specific region will typically be faster. Here is the response time report for [travislord.xyz](https://travislord.xyz/) as tested from WebSitePulse’s New York, NY server: ![Website Speed Test](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zluq9k8nou3hg4ywarnb.png) 2. **Pingdom** If you want to dive deeper into the specifics of your site’s load time, including how long each component takes to load (such as the web page, images, and style sheets), [Pingdom](https://tools.pingdom.com/) is the tool for you. This Website Speed Test tool provides detailed insights and visuals. ### Comparing Website Speeds Is your site slow or fast? To find out, compare it to other sites using the same tool. It’s important to note that the location of the site’s hosting server can significantly impact its speed. For accurate comparisons, use the WebSitePulse tool and select the region that corresponds with your website's server location. For example, Google’s US search page loads in 0.085 seconds on WebSitePulse. If your site can match or beat this, you’re in great shape. For reference, [travislord.xyz](http://travislord.xyz/) loads in 0.076 seconds, and Amazon’s home page loads in 0.4 seconds from the US, which is considered excellent for an ecommerce website. ### Extra Website Speed Test Tools: I may go into detail about these tools at a later date. For now, I will leave them here. - [KeyCDN](https://tools.keycdn.com/speed) - [Debugbear](https://www.debugbear.com/test/website-speed) - [Web page test](https://www.webpagetest.org/) - [GTmetrix](https://gtmetrix.com/) - [CLI - Timing Details With cURL](https://blog.josephscott.org/2011/10/14/timing-details-with-curl/) ### Conclusion With these tools, you no longer need to wonder if your website is running slowly. Conducting regular speed tests of your site ensures you are aware of its performance and can address any issues promptly. This not only enhances user experience but also boosts your site's reliability and efficiency. Before you go please consider supporting by giving a **Hart, Share,** or **Follow**! * **Visit My Site & Projects**: [Travis lord](https://travislord.xyz/) **|** [Projects](https://travislord.xyz/projects) **|** [About Me](https://travislord.xyz/about) | [Contact](https://travislord.xyz/contact) * **Follow:** [GitHub](https://github.com/lilxyzz) | [DEV](https://dev.to/lilxyzz) **|** [Linkedin](https://au.linkedin.com/in/travis-lord-16b947108/) **|** [Medium](https://medium.com/@travis.lord)
lilxyzz
1,922,346
xbet barca
At 1xBet, dive into a universe of sports excitement and casino thrills. With intuitive navigation and...
0
2024-07-13T14:34:01
https://dev.to/xbetbarca/xbet-barca-42ie
At 1xBet, dive into a universe of sports excitement and casino thrills. With intuitive navigation and a wealth of betting markets, 1xBet is your go-to platform for all things gaming. Website: https://1xbetbarca.net/ Address: Madhya Pradesh, India Email: bcieoxpxcmtcuixes@gmail.com #1xbet, #1xbet_casino, #1xbet_barca, #register1xbet Website: https://1xbetbarca.net/ Phone: 0920904823 Address: Madhya Pradesh https://www.allsquaregolf.com/golf-users/xbet-barca https://www.kniterate.com/community/users/xbetbarca/ https://coolors.co/u/xbet_barca https://bookmarkstime.com/story17867443/xbet-barca https://bookmarkshq.com/story18977867/xbet-barca https://ko-fi.com/xbetbarca https://rentry.co/7o9odm3b https://www.divephotoguide.com/user/xbetbarca/ https://newspicks.com/user/10475127 https://www.noteflight.com/profile/58262f2b5a662d41793523b3fad9aa351c9ee7f9 https://bookmarkspring.com/story12324919/xbet-barca https://topsitenet.com/profile/xbetbarcaey/1228601/ https://disqus.com/by/xbetbarca/about/ https://www.chordie.com/forum/profile.php?id=1999342 https://padlet.com/luonntay8201930_2 https://eatsleepride.com/rider/xbetbarca/profile http://buildolution.com/UserProfile/tabid/131/userId/411549/Default.aspx https://www.gaiaonline.com/profiles/xbetbarca/46751817/ https://linktr.ee/xbetbarca https://www.plurk.com/p/3g1orxhjy6 https://687774.8b.io/ https://findaspring.org/members/xbetbarca/ https://roomstyler.com/users/xbetbarca https://blender.community/xbetbarca/ https://allmylinks.com/xbetbarca https://postheaven.net/6mmxuunqx4 https://www.kickstarter.com/profile/xbetbarca/about https://pinshape.com/users/4860817-luonntay8201930#designs-tab-open https://www.designspiration.com/luonntay82019309/ https://www.giantbomb.com/profile/xbetbarca/ https://dreevoo.com/profile_info.php?pid=658098 https://play.eslgaming.com/player/20231763/ https://photoclub.canadiangeographic.ca/profile/21308268 https://voz.vn/u/xbetbarca.2018892/#about https://connect.garmin.com/modern/profile/4a855a29-f3e0-4e26-bc57-a079052adcd2 http://www.askmap.net/location/6965341/vietnam/xbet-barca https://www.slideserve.com/xbetbarca https://link.space/@xbetbarca https://www.checkli.com/xbetbarca http://www.so0912.com/home.php?mod=space&uid=2277424 https://vnvista.com/hi/158309 https://www.castingcall.club/xbetbarca https://www.metooo.io/u/66928af46ffe32118a0a6b82 https://www.funddreamer.com/users/xbet-barca https://maps.roadtrippers.com/people/xbetbarca https://glose.com/u/xbetbarca https://www.bark.com/en/gb/company/xbetbarca/Azw0j/ https://dsred.com/home.php?mod=space&uid=4006015 https://www.pozible.com/profile/xbet-barca https://www.portalnet.cl/usuarios/xbetbarca.1105468/#info https://www.magcloud.com/user/xbetbarca https://teletype.in/@xbetbarca https://graphcommons.com/u/xbetbarca https://www.bakespace.com/members/profile/xbetbarca/1651933/ https://www.codingame.com/profile/49adf578718ff8d13e1dc4aa3ff733865335816 https://bookmarkrange.com/story18827792/xbet-barca https://personaljournal.ca/xbetbarca/g4dq4ztpn7 https://wirtube.de/a/xbetbarca/video-channels https://crypt.lol/xbetbarca https://expathealthseoul.com/profile/xbet-barca/ https://circleten.org/a/300240?postTypeId=whatsNew https://bandori.party/user/206083/xbetbarca/ https://boersen.oeh-salzburg.at/author/xbetbarca/ https://www.artscow.com/user/3201259 https://opentutorials.org/profile/171335 https://nguoiquangbinh.net/forum/diendan/member.php?u=141263&vmid=126629#vmessage126629 https://lnk.bio/xbetbarca https://www.creativelive.com/student/xbet-barca?via=accounts-freeform_2 https://www.diggerslist.com/xbetbarca/about https://www.elephantjournal.com/profile/luonnta-y-82-01930/ https://gitlab.pavlovia.org/xbetbarca https://manylink.co/@xbetbarca https://forum.dmec.vn/index.php?members/xbetbarca.66091/ https://club.doctissimo.fr/xbetbarca/ https://dlive.tv/xbetbarca https://magic.ly/xbetbarca https://challonge.com/xbetbarca https://chodilinh.com/members/xbetbarca.88771/#about https://potofu.me/xbetbarca https://answerpail.com/index.php/user/xbetbarca https://qiita.com/xbetbarca https://audiomack.com/xbetbarca https://kumu.io/xbetbarca/sandbox#untitled-map https://camp-fire.jp/profile/xbetbarca https://www.penname.me/@xbetbarca https://zzb.bz/0xfl7 https://penzu.com/p/db2632cdc510b0d3 https://bookmarkstumble.com/story19098307/xbet-barca https://participez.nouvelle-aquitaine.fr/profiles/xbetbarca/activity?locale=en https://mssg.me/8rz2l https://unsplash.com/@xbetbarca https://help.orrs.de/user/xbetbarca https://community.fyers.in/member/0uhkKm26xU https://www.deepzone.net/home.php?mod=space&uid=3853559 https://files.fm/xbetbarca/info https://www.catchafire.org/profiles/2923158/ https://jsfiddle.net/user/xbetbarca https://wibki.com/xbetbarca?tab=xbet%20barca https://controlc.com/18e7516c https://os.mbed.com/users/xbetbarca/ https://www.nintendo-master.com/profil/xbetbarca https://www.naucmese.cz/xbet-barca?_fid=240s https://phijkchu.com/a/xbetbarca/video-channels https://list.ly/luonnta-y-82-01930/lists https://www.pearltrees.com/xbetbarca https://pixbender.com/xbetbarca45 https://linkmix.co/24555130 http://gendou.com/user/xbetbarca https://app.talkshoe.com/user/xbetbarca https://hashnode.com/@xbetbarca https://www.freewebmarks.com/story/xbet-barca https://my.omsystem.com/members/xbetbarca http://www.invelos.com/UserProfile.aspx?alias=xbetbarca https://data.world/xbetbarca https://visual.ly/users/luonntay82019302 https://hub.docker.com/u/xbetbarca https://www.wpgmaps.com/forums/users/xbetbarca/ https://devpost.com/luonnta-y-82-01930 https://tupalo.com/en/users/7019977 https://www.goodreads.com/user/show/179939420-xbet-barca https://crowdin.com/project/xbetbarca https://www.dermandar.com/user/xbetbarca/ http://idea.informer.com/users/xbetbarca/?what=personal https://bookmarkswing.com/story18915041/xbet-barca https://zenwriting.net/xbetbarca https://dribbble.com/xbetbarca/about https://www.silverstripe.org/ForumMemberProfile/show/161110 https://bouchesocial.com/story19370681/xbet-barca https://mforum1.cari.com.my/home.php?mod=space&uid=3170548&do=profile https://qooh.me/xbetbarca https://fileforum.com/profile/xbetbarca https://www.postman.com/xbetbarca/workspace/xbet-barca/overview https://forum.liquidbounce.net/user/xbetbarca/ https://www.ilcirotano.it/annunci/author/xbetbarca https://slides.com/xbetbarca https://hackerone.com/xbetbarca?type=user https://wmart.kz/forum/user/169742/ http://www.fanart-central.net/user/xbetbarca/profile https://app.roll20.net/users/13569006/xbet-b https://turkish.ava360.com/user/xbetbarca/# https://www.ethiovisit.com/myplace/xbetbarca https://www.ohay.tv/profile/xbetbarca https://research.openhumans.org/member/xbetbarca https://fontstruct.com/fontstructors/2465440/xbetbarca https://muckrack.com/xbet-barca https://hindibookmark.com/story19122683/xbet-barca https://doodleordie.com/profile/xbetbarca https://www.anobii.com/fr/01b56126cc3a55b12b/profile/activity https://huggingface.co/xbetbarca https://skitterphoto.com/photographers/103031/xbet-barca https://inkbunny.net/xbetbarca https://community.tableau.com/s/profile/0058b00000IZmdY https://www.mobafire.com/profile/xbetbarca-1159964 https://www.fimfiction.net/user/770374/xbetbarca https://conifer.rhizome.org/xbetbarca https://www.equinenow.com/farm/xbetbarca.htm https://bookmarkextent.com/story19077772/xbet-barca https://dev.to/xbetbarca https://blogfonts.com/user/832863.html https://confengine.com/user/xbet-barca https://www.weddingbee.com/members/xbetbarca/ https://myanimelist.net/profile/xbetbarca https://shoplook.io/profile/xbetbarca https://tvchrist.ning.com/profile/xbetbarca www.artistecard.com/xbetbarca#!/contact https://www.pubpub.org/user/xbet-barca https://able2know.org/user/xbetbarca/ https://writeablog.net/xbetbarca https://bookmarkport.com/story19541246/xbet-barca https://chart-studio.plotly.com/~xbetbarca https://www.cakeresume.com/me/xbetbarca https://leetcode.com/u/xbetbarca/ https://www.reverbnation.com/xbetbarca https://www.metal-archives.com/users/xbetbarca https://www.hahalolo.com/@66928ecc05740e60d0955d95 http://hawkee.com/profile/7287610/ https://www.bondhuplus.com/xbetbarca https://www.circleme.com/xbetbarcaws https://www.foroatletismo.com/foro/members/xbetbarca.html https://electronoobs.io/profile/39781# http://bbs.01bim.com/home.php?mod=space&uid=962260 https://xbetbarca.notepin.co/ https://pxhere.com/en/photographer-me/4309074 https://collegeprojectboard.com/author/xbetbarca/ https://golosknig.com/profile/xbetbarca/ https://community.amd.com/t5/user/viewprofilepage/user-id/426768 https://flipboard.com/@xbetbarca https://naijamp3s.com/index.php?a=profile&u=xbetbarca https://lab.quickbox.io/xbetbarcavs https://gatherbookmarks.com/story18170807/xbet-barca https://bookmarketmaven.com/story17980077/xbet-barca https://p.lu/a/xbetbarca/video-channels https://my.desktopnexus.com/xbetbarca/ https://www.openstreetmap.org/user/xbetbarca https://dirstop.com/story19714426/xbet-barca https://peatix.com/user/23058034/view https://answerpail.com/index.php/user/xbetbarca https://stocktwits.com/xbetbarca https://www.shippingexplorer.net/en/user/xbetbarca/108970 https://suzuri.jp/xbetbarca https://www.proarti.fr/account/xbetbarca https://socialtrain.stage.lithium.com/t5/user/viewprofilepage/user-id/76705 https://rotorbuilds.com/profile/49082/ https://hypothes.is/users/xbetbarca https://www.edna.cz/uzivatele/xbetbarca/ https://dutrai.com/members/xbetbarca.28790/#about https://hubpages.com/@xbetbarca#about https://www.instapaper.com/p/xbetbarca https://www.giveawayoftheday.com/forums/profile/201691 https://www.exchangle.com/xbetbarca https://telegra.ph/xbetbarca-07-13 https://willysforsale.com/profile/xbetbarca
xbetbarca
1,922,347
Case Study: Counting Keywords
This section presents an application that counts the number of the keywords in a Java source file....
0
2024-07-13T14:35:39
https://dev.to/paulike/case-study-counting-keywords-4kfa
java, programming, learning, beginners
This section presents an application that counts the number of the keywords in a Java source file. For each word in a Java source file, we need to determine whether the word is a keyword. To handle this efficiently, store all the keywords in a **HashSet** and use the **contains** method to test if a word is in the keyword set. The code below gives this program. ``` package demo; import java.util.*; import java.io.*; public class CountKeywords { public static void main(String[] args) { Scanner input = new Scanner(System.in); System.out.print("Enter a Java source file: "); String filename = input.nextLine(); File file = new File(filename); if(file.exists()) { try { System.out.println("The number of keywords in " + filename + " is " + countKeywords(file)); } catch (Exception e) { System.out.println("An error occurred while counting keywords: " + e.getMessage()); } } else { System.out.println("File " + filename + " does not exist"); } } public static int countKeywords(File file) throws Exception { // Array of all Java keywords + true, false and null String[] keywordString = {"abstract", "assert", "boolean", "break", "byte", "case", "catch", "char", "class", "const", "continue", "default", "do", "double", "else", "enum", "extends", "for", "final", "finally", "float", "goto", "if", "implements", "import", "instanceof", "int", "interface", "long", "native", "new", "package", "private", "protected", "public", "return", "short", "static", "strictfp", "super", "switch", "synchronized", "this", "throw", "throws", "transient", "try", "void", "volatile", "while", "true", "false", "null"}; Set<String> keywordSet = new HashSet<>(Arrays.asList(keywordString)); int count = 0; Scanner input = new Scanner(file); while(input.hasNext()) { String word = input.next(); if(keywordSet.contains(word)) count++; } return count; } } ``` `Enter a Java source file: c:\Welcome.java The number of keywords in c:\Welcome.java is 5` `Enter a Java source file: c:\TTT.java File c:\TTT.java does not exist` The program prompts the user to enter a Java source filename (line 9) and reads the filename (line 10). If the file exists, the **countKeywords** method is invoked to count the keywords in the file (line 15). The **countKeywords** method creates an array of strings for the keywords (lines 26) and creates a hash set from this array (lines 28). It then reads each word from the file and tests if the word is in the set (line 35). If so, the program increases the count by 1 (line 36). You may rewrite the program to use a **LinkedHashSet**, **TreeSet**, **ArrayList**, or **LinkedList** to store the keywords. However, using a **HashSet** is the most efficient for this program.
paulike
1,922,348
Resolving Module Version Chaos: Locking Down Dependencies in Python Projects with Poetry
Hey there! 👋 I've got a nifty trick to share about managing Python dependencies, especially when...
0
2024-07-13T14:35:49
https://dev.to/ma7dev/resolving-module-version-chaos-locking-down-dependencies-in-python-projects-with-poetry-4mlf
python, pip, poetry, bash
Hey there! 👋 I've got a nifty trick to share about managing Python dependencies, especially when they're not version-locked. Let me walk you through how I tackled it using **Poetry**. ## Problem 🤔 Ever faced a `requirements.txt` that looks like this? ```txt tqdm matplotlib ``` No version numbers can be a recipe for chaos during builds or at runtime due to inconsistencies. I needed to lock these dependencies to specific versions to keep things smooth and reliable, like this: ```txt tqdm==4.64.0 matplotlib==3.5.3 ``` ## Solution ✨ ### Why Poetry? I chose **Poetry** because it's like the npm of the Python world—it respects semantic versioning and creates a lock file so every install is consistent. No more "works on my machine" issues! ### Step-by-Step Guide #### 1) **Install Poetry:** ```sh curl -sSL https://install.python-poetry.org | python3 - ``` #### 2) **Grab a simple `pyproject.toml` template:** ```sh wget https://gist.githubusercontent.com/ma7dev/7298ffc4409032edd4d18a57b4c38f3a/raw/1c32efcbde31aaf896c6d47b32dac19ed44d14a4/pyproject.toml ``` #### 3) **Install those unversioned dependencies:** ```sh cat requirements.txt | xargs poetry add ``` #### 4) **Export the installed dependencies in a more structured format:** ```sh poetry export -f requirements.txt --output long_requirements.txt --without-hashes ``` #### 5) **Clean up the exported file:** ```sh # Strip unwanted python version constraints cat long_requirements.txt | cut -d ";" -f 1 > with_dep_requirements.txt # Filter out extraneous dependencies cat requirements.txt | while read line do echo $(grep -n $line'==' with_dep_requirements.txt | cut -d ":" -f 2) >> final_requirements.txt done ``` ### Result 🚀 Here’s what you end up with, all dependencies neatly versioned (`final_requirements.txt`): ```txt tqdm==4.64.0 matplotlib==3.5.3 ... (rest of your dependencies) ``` This setup ensures that all packages are locked to specific versions, making your project stable and reproducible wherever it goes. 🌐 --- If you enjoyed reading this article, check my other articles on [ma7.dev/blog](https://ma7.dev/blog).
ma7dev
1,922,349
Conhecendo o LangChain
Atualmente, uma biblioteca tem se destacado bastante no universo de Inteligência Artificial. Esta...
0
2024-07-13T15:45:06
https://dev.to/programadriano/conhecendo-o-langchain-egk
--- title: Conhecendo o LangChain published: true description: tags: # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2024-07-13 14:22 +0000 --- Atualmente, uma biblioteca tem se destacado bastante no universo de Inteligência Artificial. Esta biblioteca se chama LangChain. Para quem ainda não conhece ela, esta biblioteca oferece uma ampla gama de funcionalidades para a criação e o gerenciamento de pipelines de processamento de linguagem natural. A seguir você tem uma imagem demonstrando esta biblioteca junto com o seu toolkit (kit de ferramentas) que ela nos disponibiliza: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x78i9sxpv5z15ssnkgpq.png) Vamos conhecer cada um deles abaixo: * Document Loaders and Utils: Essa parte do LangChain se refere aos carregadores de documentos e utilitários. Eles são responsáveis por ler, processar e carregar documentos de diversas fontes (arquivos de texto, PDFs, bancos de dados, etc.) para que possam ser utilizados nas operações de processamento de linguagem natural. * Prompts: Prompts são instruções ou perguntas fornecidas ao modelo de linguagem para gerar respostas ou realizar uma tarefa específica. Eles são fundamentais para direcionar o comportamento do modelo de linguagem. * Chains (ou cadeias): Elas representam uma sequência de etapas ou operações que são realizadas em conjunto para alcançar um objetivo específico. Embora o uso de um único LLM possa ser suficiente para tarefas mais simples, LangChain fornece uma interface padrão e algumas implementações comumente usadas para encadear LLMs para aplicações mais complexas, entre si ou com outros módulos especializados. * LLM (Large Language Models): LLMs são modelos de linguagem de grande escala, como GPT-4, BERT, e outros, que são utilizados para realizar várias tarefas de processamento de linguagem natural, como tradução, sumarização, e geração de texto. * Agents: Eles são componentes que agem de forma autônoma para realizar tarefas específicas baseadas em entradas e em modelos de linguagem. Eles podem tomar decisões e realizar ações com base em regras ou aprendizagem. Para ficar mais claro como esta biblioteca funciona, vejamos um exmeplo prático dela utilizando o Python + LangChain + Arquivo .txt onde vamos analisar o sentimento de alguns títulos de noticias. ```bash from langchain_community.llms import Ollama from langchain.callbacks.manager import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.prompts import PromptTemplate from langchain.chains import LLMChain from langchain_community.document_loaders import TextLoader import os def carregar_documentos(caminho_arquivo): loader = TextLoader(caminho_arquivo) documentos = loader.load() return documentos def limpar_texto(texto): return texto.strip() llm = Ollama( model="llama2", num_gpu=0, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]) ) prompt_sentimento = "Analise o sentimento do seguinte texto em português: {text}" prompt_resumo = "Gere um resumo em português para o seguinte texto: {text}" template_sentimento = PromptTemplate(input_variables=["text"], template=prompt_sentimento) template_resumo = PromptTemplate(input_variables=["text"], template=prompt_resumo) chain_sentimento = LLMChain(llm=llm, prompt=template_sentimento) chain_resumo = LLMChain(llm=llm, prompt=template_resumo) caminho_arquivo = os.path.join(os.path.dirname(__file__), "noticias.txt") if not os.path.exists(caminho_arquivo): raise FileNotFoundError(f"O arquivo {caminho_arquivo} não foi encontrado.") documentos = carregar_documentos(caminho_arquivo) for doc in documentos: texto_limpo = limpar_texto(doc.page_content) resultado_sentimento = chain_sentimento.run({"text": texto_limpo}) resultado_resumo = chain_resumo.run({"text": texto_limpo}) print(f"Notícia: {texto_limpo}") print(f"Sentimento: {resultado_sentimento}") print(f"Resumo: {resultado_resumo}") print("-" * 50) ``` Executando o código nós temos o seguinte resultado: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fsmaw5rqulnnk1bndlw6.png) Bom, com isso eu finalizo mais este artigo, espero que tenham gostado e até a próxima pessoal :)
programadriano
1,922,351
Kolors AI: Revolutionizing Photorealistic Text-to-Image Generator
Table of Contents What is AI Image Generator? What is Kolors AI? Key Features of Kolors AI How to...
0
2024-07-13T14:45:58
https://dev.to/christianhappygo/kolors-ai-revolutionizing-photorealistic-text-to-image-generator-2eia
ai
**Table of Contents** 1. What is AI Image Generator? 2. What is [Kolors AI](https://www.kolors-ai.com/)? 3. Key Features of [Kolors AI](https://www.kolors-ai.com/) 4. How to Use Kolors AI? 5. Kolors AI Pricing 6. Benefits of Using Kolors AI 7. Conclusion 8. Frequently Asked Questions 9. How does Kolors AI synthesize images? 10. What types of images can Kolors AI generate? 11. What is the maximum resolution Kolors AI can generate? 12. Does Kolors AI support multiple languages? 13. What are the pricing options for Kolors AI? ![Kolors AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7xnitbnt5jn1ohdda9x.png) **What is AI Image Synthesis?** AI image synthesis refers to the use of artificial intelligence and machine learning algorithms to create or enhance images based on textual descriptions. This technology leverages large datasets and complex models to generate high-quality, photorealistic images from text prompts, making it possible to visualize concepts and ideas with remarkable detail and accuracy. **What is [Kolors AI](https://www.kolors-ai.com/)?** [Kolors AI](https://www.kolors-ai.com/) is an advanced latent diffusion model developed by the Kolors Team at Kuaishou Technology. Designed for photorealistic text-to-image synthesis, Kolors AI excels in rendering detailed and visually appealing images from both English and Chinese text prompts. Utilizing state-of-the-art machine learning techniques, Kolors AI sets new standards in image generation, making it a powerful tool for various creative and practical applications. **Key Features of [Kolors AI](https://www.kolors-ai.com/)** Bilingual Text Comprehension Kolors AI leverages the General Language Model (GLM) to understand and process both English and Chinese texts, outperforming traditional models like T5 used in other image synthesis tools. **Enhanced Training Strategy** Kolors AI's training involves two distinct phases: concept learning with broad knowledge and quality improvement with high-aesthetic data. This ensures the model produces images with exceptional visual appeal and detail. **Optimized Noise Schedule** A novel noise schedule is employed to enhance high-resolution image generation, allowing Kolors AI to produce images that are not only high in quality but also rich in detail. **KolorsPrompts Benchmark** Kolors AI includes a category-balanced benchmark, KolorsPrompts, which guides the training and evaluation process to ensure superior performance in various categories and challenges. **How to Use [Kolors AI](https://www.kolors-ai.com/)?** Using Kolors AI is simple and straightforward: 1. Visit the Kolors AI Website: Go to the [Kolors AI](https://www.kolors-ai.com/) website. 2. Input your prompt about how the image will be like. 3. Download Image: Once the image is ready, download it to your device. **Kolors AI Pricing** Kolors AI is now totaly free, generate any image as you like. **Benefits of Using Kolors AI** 1. Improved Image Quality Kolors AI enhances image quality by generating photorealistic images with high detail and minimal noise. 2. Versatility Kolors AI can generate images across various styles and categories, making it suitable for diverse applications, from creative projects to professional presentations. 3. Ease of Use With a user-friendly interface, Kolors AI is accessible to users with limited technical backgrounds, allowing anyone to generate high-quality images with ease. 4. Time-Saving Kolors AI's efficient algorithms and streamlined process save users time, delivering high-quality images quickly and effortlessly. **Conclusion** [Kolors AI](https://www.kolors-ai.com/) is a revolutionary tool in the field of text-to-image synthesis, offering unparalleled capabilities in generating photorealistic images from text prompts. With its advanced features, user-friendly interface, and versatile applications, Kolors AI empowers users to bring their ideas to life with remarkable clarity and detail. **Frequently Asked Questions** 1. How does Kolors AI synthesize images? Kolors AI uses a latent diffusion model trained on extensive datasets and a novel noise schedule to generate high-quality, detailed images from text descriptions. 2. What types of images can Kolors AI generate? Kolors AI can generate a wide range of images, including photorealistic scenes, artistic renditions, and detailed illustrations, based on text prompts in both English and Chinese. 3. What is the maximum resolution Kolors AI can generate? Kolors AI can generate images up to 4K resolution, ensuring high detail and clarity in the output. 4. Does [Kolors AI](https://www.kolors-ai.com/) support multiple languages? Yes, Kolors AI supports both English and Chinese, making it accessible to a broader audience.
christianhappygo
1,922,353
Front-End Development Tools Installation and Configuration (Mac)
Preface This tutorial will guide you step-by-step into the world of front-end development...
0
2024-07-13T14:47:24
https://dev.to/lunamiller/front-end-development-tools-installation-and-configuration-mac-5hbe
webdev, beginners, programming, frontend
### Preface This tutorial will guide you step-by-step into the world of front-end development tools on both Windows and Mac platforms. Together, we will explore how to properly install and configure commonly used development tools, making it easy for you to take your first step in front-end development. Whether you are a beginner or an experienced developer, this tutorial will help you resolve various issues during the installation process of development tools, allowing you to focus more on writing code. Let's get started! ### VSCode (Free and User-Friendly Code Editor) Click to Download: [https://code.visualstudio.com/Download](https://code.visualstudio.com/Download) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bjthk6i0jkeicqxe1rh8.png) 1. The official website provides three installation methods. To determine your chip type, there are two ways: First method: Click the Apple icon in the top left corner -> About This Mac -> Check the chip type. Second method: Use a command. Open the terminal and input uname -m. It returns x86_64 (Intel-based) or arm64 (Apple silicon-based). Intel chip: Intel chip. Apple silicon: Apple silicon (commonly known as M series chips). Universal: Universal version. 2. After the download is complete, click to view. 3. Actually, it's not over yet because it's currently in the Downloads folder and not in our Launchpad, so we can't find it there. Generally, software installed on Mac is in dmg format. After clicking it, drag the compressed file to the Applications folder. This completes the installation. Since we downloaded the zip format, it directly contains the application. To place it in the Applications folder, click Finder -> Drag the downloaded file to Applications. This completes the installation. 4. When we run the program, a dialog box will ask if you want to open it. Click Open to enter the VSCode interface, and you can start using it. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ewho0qiqg4oru0n0pcow.png) 5. Useful plugin recommendations will be shared at the end. ### ServBay (Development Environment Management Tool for Mac OS) Before installing node.js, I want to introduce a tool called ServBay. What is it and what can it do? ServBay is an all-in-one [development environment](https://www.servbay.com) management tool designed to ease the burden of maintaining development environments. It allows developers to start coding within minutes without spending time installing and debugging the development environment. Official website: [ServBay](https://www.servbay.com) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/incq2zzzifre7ypsozn9.png) ServBay supports various versions of Node.js, ensuring you can choose the appropriate version for development and deployment based on your project requirements. Here are some common versions of Node.js supported by ServBay We can easily install and manage Node.js through ServBay's GUI panel. Here are the steps to install Node.js via the ServBay GUI panel: 1. Open the ServBay GUI panel. 2. Navigate to the `Services` section. 3. Select the Node.js version you need. 4. Click the green `Install` button and wait for the installation to complete. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ydzeozmf5fkmfsrkgbmh.png) #### Bundled Modules ServBay provides several package managers for Node.js, making it convenient to manage project dependencies: **npm (Node Package Manager)**: The default Node.js package manager and the most widely used. **pnpm**: An efficient package manager that saves disk space and speeds up installation. **yarn**: A package manager developed by Facebook that offers stable and efficient dependency management. **Note**: ServBay also allows easy switching, installation, and viewing of node versions. The reason I recommend using ServBay is that during actual development, if multiple projects are running simultaneously and they rely on different versions of node, it would be very inconvenient to uninstall the current version and install the required version each time. ServBay can help solve this problem, no need nvm. ### Git (Version Control Tool) Mac actually comes with git. Input git -v in the terminal. If a version number appears, git is installed; otherwise, it needs to be installed. Download the installer from the official website. Official website: [git-scm.com](https://git-scm.com) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jd95bm1l9au07eo8uy0o.png) To verify the installation, open the terminal and input git -v. If a version number appears, the installation was successful. ### VSCode Useful Plugins 1. Theme Recommendation: Ayu (available in dark and light modes, I personally prefer dark mode). 2. ESLint (Helps you find and fix issues in your code). 3. Prettier (Formats code to maintain a consistent style, usually used with ESLint). 4. GitLens (Displays the history of each line of code, among other features). 5. Guides (Helps locate the start and end of brackets when code is deeply nested). 6. Image Preview (Previews images, very useful). 7. Material Icon Theme (Attractive icons). 8. Path Intellisense (Auto-completes file paths, very useful). 9. Todo Tree (Adds a TODO icon after downloading, allowing quick navigation to comments like //TODO xxx). 10. Volar (A must-have for Vue developers, provides code highlighting and other features).
lunamiller
1,922,354
Trendz-An Ecommerce Site Using Wix
This is a submission for the Wix Studio Challenge . What I Built Trendz is a...
0
2024-07-13T14:52:46
https://dev.to/itxnargis/trendz-an-ecommerce-site-3pah
devchallenge, wixstudiochallenge, webdev, javascript
*This is a submission for the [Wix Studio Challenge ](https://dev.to/challenges/wix).* ## What I Built Trendz is a comprehensive eCommerce website featuring sections such as Home, About, Services, and Products. The site is designed to provide users with a seamless and personalized shopping experience. Utilizing Wix Stores and Wix Member Area for login and signup, Trendz ensures secure and efficient user management. ## Demo ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gl2dc6udqbz820mzenuy.jpeg) Demo: [https://itxnargiskhatun.wixstudio.io/trendz](https://itxnargiskhatun.wixstudio.io/trendz) ## Development Journey Trendz focuses on providing a seamless shopping experience for users, featuring a variety of products. Creating Trendz was a fulfilling journey, especially since I started with no prior knowledge of Wix. My first task was to create the header and footer sections. I focused on making these sections clean and user-friendly to ensure easy navigation. The website includes a dynamic homepage with sections for Home, About, Services, and Products, as well as a robust user authentication system through the Wix Member Area for login and signup. **Wix Stores API:** Integrated to manage product data and display information on the website. This allowed for easy addition, categorization, and management of products, as well as the creation of dynamic product pages. **Wix Member Area API:** Implemented to facilitate user login and signup, enabling users to create accounts, save preferences, and track orders.
itxnargis
1,922,355
Burden Of Proof
Page: METAPhilosophy Relevance: Justified Cognitive Just because someone denies something, does...
0
2024-07-13T14:53:08
https://dev.to/metaphilosophy/burden-of-proof-1ci3
metaphilosophy, philosophy, epistemology, burdenofproof
> Page: [METAPhilosophy](https://dev.to/metaphilosophy) > Relevance: [Justified Cognitive](https://dev.to/metaphilosophy/justified-cognitive-313m) Just because someone denies something, does that mean there's no burden of proof? Many try to evade by saying, 'If it exists, then prove it.' But if I believe it doesn't exist, how do I prove it? So, is there no burden to prove for those who deny the existence of something? Is that it? If you reject something, it doesn't mean you're free to claim, 'No need to prove something that consider doesn't exists.' <u>BUT, PROVE THAT IT'S IMPOSSIBLE TO BE EXISTS❗️</u> Actually, when someone asserts something, there should be a reason. And asserting the absence of something also requires a reason (burden of proof). It's just generally not understood. If someone rejects something, at the very least there should be evidence showing the impossibility of 'its existence.' **Simply put, prove the absence of traces of something considered non-existent.** ✅ Let's avoid speaking without evidence, assuming proof isn't necessary; that's how it is. ❇️ For example, 'There was a duck-headed apple strolling along the roadside this morning.' Then someone refutes it by asserting, 'That didn't happen.' I would counter with, 'Prove your denial.' They should then say, 'There are no signs characterizing an apple (here, they must show the absence of signs - apple tracks, head tracks, etc.).' ✅ In short, denial still requires proof. Specifically, proof of 'impossibility,' because those making the claim are proving 'possibility of existence,' thus those refuting the claim need to prove the impossibility of its existence. ❇️ This doesn't mean those who reject don't need to prove; they still need to prove the absence of its traces, its impossibility.
metaphilosophy
1,922,357
Maps
You can create a map using one of its three concrete classes: HashMap, LinkedHashMap, or TreeMap. A...
0
2024-07-13T15:03:22
https://dev.to/paulike/maps-21d3
java, programming, learning, beginners
You can create a map using one of its three concrete classes: HashMap, LinkedHashMap, or TreeMap. A map is a container object that stores a collection of key/value pairs. It enables fast retrieval, deletion, and updating of the pair through the key. A map stores the values along with the keys. The keys are like indexes. In **List**, the indexes are integers. In **Map**, the keys can be any objects. A map cannot contain duplicate keys. Each key maps to one value. A key and its corresponding value form an entry stored in a map, as shown in Figure below (a). Figure below (b) shows a map in which each entry consists of a Social Security number as the key and a name as the value. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bctd86eypejhch52skmn.png) There are three types of maps: **HashMap**, **LinkedHashMap**, and **TreeMap**. The common features of these maps are defined in the **Map** interface. Their relationship is shown in Figure below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gmrg15npht8lur45deu8.png) The **Map** interface provides the methods for querying, updating, and obtaining a collection of values and a set of keys, as shown in Figure below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e6dh06maflif28lojheo.png) The _update methods_ include **clear**, **put**, **putAll**, and **remove**. The **clear()** method removes all entries from the map. The **put(K key, V value)** method adds an entry for the specified key and value in the map. If the map formerly contained an entry for this key, the old value is replaced by the new value and the old value associated with the key is returned. The **putAll(Map m)** method adds all entries in **m** to this map. The **remove(Object key)** method removes the entry for the specified key from the map. The _query methods_ include **containsKey**, **containsValue**, **isEmpty**, and **size**. The **containsKey(Object key)** method checks whether the map contains an entry for the specified key. The **containsValue(Object value)** method checks whether the map contains an entry for this value. The **isEmpty()** method checks whether the map contains any entries. The **size()** method returns the number of entries in the map. You can obtain a set of the keys in the map using the **keySet()** method, and a collection of the values in the map using the **values()** method. The **entrySet()** method returns a set of entries. The entries are instances of the **Map.Entry<K, V>** interface, where **Entry** is an inner interface for the **Map** interface, as shown in Figure below. Each entry in the set is a key/value pair in the underlying map. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/168x83y9hf4jgyza0jdm.png) The **AbstractMap** class is a convenience abstract class that implements all the methods in the **Map** interface except the **entrySet()** method. The **HashMap**, **LinkedHashMap**, and **TreeMap** classes are three _concrete implementations_ of the **Map** interface, as shown in Figure below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7l33ep7ntf82oca7fy3t.png) The **HashMap** class is efficient for locating a value, inserting an entry, and deleting an entry. **LinkedHashMap** extends **HashMap** with a linked-list implementation that supports an ordering of the entries in the map. The entries in a **HashMap** are not ordered, but the entries in a **LinkedHashMap** can be retrieved either in the order in which they were inserted into the map (known as the _insertion order_) or in the order in which they were last accessed, from least recently to most recently accessed (_access order_). The no-arg constructor constructs a **LinkedHashMap** with the insertion order. To construct a **LinkedHashMap** with the access order, use **LinkedHashMap(initialCapacity, loadFactor, true)**. The **TreeMap** class is efficient for traversing the keys in a sorted order. The keys can be sorted using the **Comparable** interface or the **Comparator** interface. If you create a **TreeMap** using its no-arg constructor, the **compareTo** method in the **Comparable** interface is used to compare the keys in the map, assuming that the class for the keys implements the **Comparable** interface. To use a comparator, you have to use the **TreeMap(Comparator comparator)** constructor to create a sorted map that uses the **compare** method in the comparator to order the entries in the map based on the keys. **SortedMap** is a subinterface of **Map**, which guarantees that the entries in the map are sorted. Additionally, it provides the methods **firstKey()** and **lastKey()** for returning the first and last keys in the map, and **headMap(toKey)** and **tailMap(fromKey)** for returning a portion of the map whose keys are less than **toKey** and greater than or equal to **fromKey**, respectively. **NavigableMap** extends **SortedMap** to provide the navigation methods **lowerKey(key)**, **floorKey(key)**, **ceilingKey(key)**, and **higherKey(key)** that return keys respectively less than, less than or equal, greater than or equal, and greater than a given key and return **null** if there is no such key. The **pollFirstEntry()** and **pollLastEntry()** methods remove and return the first and last entry in the tree map, respectively. Prior to Java 2, **java.util.Hashtable** was used for mapping keys with values. **Hashtable** was redesigned to fit into the Java Collections Framework with all its methods retained for compatibility. **Hashtable** implements the **Map** interface and is used in the same way as **HashMap**, except that the update methods in **Hashtable** are synchronized. The code below gives an example that creates a _hash map_, a _linked hash map_, and a _tree map_ for mapping students to ages. The program first creates a hash map with the student’s name as its key and the age as its value. The program then creates a tree map from the hash map and displays the entries in ascending order of the keys. Finally, the program creates a linked hash map, adds the same entries to the map, and displays the entries. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pp32an1eyj5bbi72huk1.png) `Display entries in HashMap {Cook=29, Smith=30, Lewis=29, Anderson=31} Display entries in ascending order of key {Anderson=31, Cook=29, Lewis=29, Smith=30} The age for Lewis is 29 Display entries in LinkedHashMap {Smith=30, Anderson=31, Cook=29, Lewis=29}` As shown in the output, the entries in the **HashMap** are in random order. The entries in the **TreeMap** are in increasing order of the keys. The entries in the **LinkedHashMap** are in the order of their access, from least recently accessed to most recently. All the concrete classes that implement the **Map** interface have at least two constructors. One is the no-arg constructor that constructs an empty map, and the other constructs a map from an instance of **Map**. Thus, **new TreeMap<String, Integer>(hashMap)** (lines 18) constructs a tree map from a hash map. You can create an insertion-ordered or access-ordered linked hash map. An access-ordered linked hash map is created in lines 23. The most recently accessed entry is placed at the end of the map. The entry with the key **Lewis** is last accessed in line 30, so it is displayed last in line 33. If you don’t need to maintain an order in a map when updating it, use a **HashMap**. When you need to maintain the insertion order or access order in the map, use a **LinkedHashMap**. When you need the map to be sorted on keys, use a **TreeMap**.
paulike
1,922,358
Qaushey: A 100% handloom brand!
Qaushey did not start as Qaushey. About 9 years ago, it was actually a faint idea of what was going...
0
2024-07-13T15:04:43
https://dev.to/demo_research_f81f2604301/qaushey-a-100-handloom-brand-54aa
Qaushey did not start as Qaushey. About 9 years ago, it was actually a faint idea of what was going to turn into a passion for creativity, love for Indian Handlooms and the whole wide world to take this idea to. One of the weekends, on a rainy Mumbai afternoon, the topic of how the Indian way of living is the best way to live sustainability started. It led to the food, the Yoga, the Ayurveda and so many more snippets of our culture. One of the important aspects of our culture is clothing. And there are so many varieties of them that we can go on forever and still not be done with the diversity it bring to us. The advent of “Handloom” dates back to the Indus Valley civilization. The art was appreciated in every nook and corner of the world including faraway lands of Romans, Mesopotamians, and Egyptians. This is where the inspiration for the idea of Qaushey came into being. Shruti and Jaspreet decided they would take this to the world one day. Almost always, it takes a lot of courage to step out of your zone of comfort and start building something from scratch. Time went by and their conviction to make Qaushey known to everyone around the world got stronger. Someday in the year 2023, they decided to go all in and make this a reality. There was no eureka moment because they were so convinced that this is the right thing to do for our artisans, customers around the globe and for our beloved country above all. October 16th, 2023 Qaushey had its genesis. https://qaushey.com/
demo_research_f81f2604301
1,922,359
events.wordpress.org - all over the world: 3,925 events this year :: 108 countries :: 540,537 participants
Meet the community - behind WordPress: https://events.wordpress.org - all over the world: 3,925...
0
2024-07-13T15:06:50
https://dev.to/hub24/eventswordpressorg-all-over-the-world-3925-events-this-year-108-countries-540537-participants-44ik
Meet the community - behind WordPress: https://events.wordpress.org - all over the world: 3,925 events this year :: 108 countries :: 540,537 participants All over the world contributors meet to share, learn, and collaborate on the WordPress project. Where WordPress contributors meet see a full list of events: Meet the community - behind WordPress: https://events.wordpress.org a. Wordcamp: https://central.wordcamp.org/ Wordcamps all arount the world b. https://www.meetup.com/pro/wordpress WordPress-Meetups c. https://doaction.org/ - do action hackatons :
hub24
1,922,361
The Definitive Guide to RxJS Creation Operators in Angular
RxJS (Reactive Extensions for JavaScript) is a powerful library for reactive programming using...
0
2024-07-13T15:09:23
https://dev.to/itsshaikhaj/the-definitive-guide-to-rxjs-creation-operators-in-angular-1icd
angular, rxjs, typescript, webdev
RxJS (Reactive Extensions for JavaScript) is a powerful library for reactive programming using observables, making it easier to compose asynchronous or callback-based code. When combined with Angular, RxJS provides a robust and flexible way to handle asynchronous operations and manage data streams. In this article, we'll explore RxJS in Angular with practical examples, focusing on various creation operators. ### **Table of Contents** | Heading | Sub-Topics | |----------------------------------|----------------------------------------------------------| | **Introduction to RxJS** | What is RxJS?, Importance in Angular, Basic Concepts | | **Setting Up RxJS in Angular** | Installing RxJS, Setting Up a New Angular Project | | **Understanding Observables** | What is an Observable?, Subscribing to Observables | | **Creation Operators Overview** | Overview of Creation Operators, Usage in Angular | | **Using `of()` Operator** | Example, Use Cases, Outputs | | **Using `from()` Operator** | Example, Converting Arrays, Outputs | | **Using `fromEvent()` Operator** | Example, DOM Events, Outputs | | **Using `interval()` Operator** | Example, Timed Sequences, Outputs | | **Using `timer()` Operator** | Example, Delayed Execution, Outputs | | **Using `range()` Operator** | Example, Emitting Sequences, Outputs | | **Using `defer()` Operator** | Example, Deferred Execution, Outputs | | **Using `generate()` Operator** | Example, Custom Sequences, Outputs | | **Using `empty()` Operator** | Example, Emitting Complete, Outputs | | **Using `never()` Operator** | Example, Infinite Observables, Outputs | | **Using `throwError()` Operator**| Example, Emitting Errors, Outputs | | **Using `iif()` Operator** | Example, Conditional Observables, Outputs | | **Practical Applications** | Combining Operators, Real-World Scenarios | | **Advanced Tips and Tricks** | Best Practices, Performance Tips | | **Common Pitfalls and Solutions**| Avoiding Mistakes, Debugging Tips | | **FAQs** | Common Questions, Detailed Answers | | **Conclusion** | Summary, Further Reading | --- ### **Introduction to RxJS** #### **What is RxJS?** RxJS (Reactive Extensions for JavaScript) is a library for composing asynchronous and event-based programs by using observable sequences. It provides powerful operators to work with asynchronous data streams. #### **Importance in Angular** Angular heavily relies on RxJS for handling asynchronous operations, especially HTTP requests, events, and reactive forms. Understanding RxJS is crucial for effective Angular development. #### **Basic Concepts** - **Observables**: Collections of data over time. - **Observers**: Functions that listen to observables. - **Operators**: Functions that enable complex asynchronous code. ### **Setting Up RxJS in Angular** #### **Installing RxJS** RxJS comes bundled with Angular, so there's no need for separate installation. However, if needed, it can be installed via npm: ```bash npm install rxjs ``` #### **Setting Up a New Angular Project** To create a new Angular project, use the Angular CLI: ```bash ng new rxjs-angular-demo cd rxjs-angular-demo ng serve ``` ### **Understanding Observables** #### **What is an Observable?** An observable is a stream that can emit multiple values over time. It’s a powerful way to handle asynchronous operations in Angular. #### **Subscribing to Observables** To consume the values emitted by an observable, you subscribe to it: ```typescript import { of } from 'rxjs'; const observable = of(1, 2, 3); observable.subscribe(value => console.log(value)); ``` ### **Creation Operators Overview** Creation operators are used to create observables from various sources such as arrays, events, or intervals. ### **Using `of()` Operator** #### **Example** The `of()` operator creates an observable from a list of values: ```typescript import { of } from 'rxjs'; const numbers$ = of(1, 2, 3, 4, 5); numbers$.subscribe(value => console.log(value)); ``` #### **Use Cases** - Emitting a sequence of values. - Testing sequences of data. #### **Outputs** ``` 1 2 3 4 5 ``` ### **Using `from()` Operator** #### **Example** The `from()` operator creates an observable from an array or iterable: ```typescript import { from } from 'rxjs'; const array$ = from([10, 20, 30]); array$.subscribe(value => console.log(value)); ``` #### **Converting Arrays** Easily convert arrays to observables for processing: ```typescript const numbers = [1, 2, 3, 4, 5]; const numbers$ = from(numbers); numbers$.subscribe(value => console.log(value)); ``` #### **Outputs** ``` 10 20 30 ``` ### **Using `fromEvent()` Operator** #### **Example** The `fromEvent()` operator creates an observable from DOM events: ```typescript import { fromEvent } from 'rxjs'; const clicks$ = fromEvent(document, 'click'); clicks$.subscribe(event => console.log(event)); ``` #### **DOM Events** Handle DOM events like clicks, inputs, or mouse movements: ```typescript const clicks$ = fromEvent(document.getElementById('myButton'), 'click'); clicks$.subscribe(event => console.log('Button clicked!', event)); ``` #### **Outputs** When a button is clicked, it logs the event object. ### **Using `interval()` Operator** #### **Example** The `interval()` operator creates an observable that emits a sequence of numbers at regular intervals: ```typescript import { interval } from 'rxjs'; const interval$ = interval(1000); interval$.subscribe(value => console.log(value)); ``` #### **Timed Sequences** Generate timed sequences for periodic tasks: ```typescript const seconds$ = interval(1000); seconds$.subscribe(value => console.log(`Seconds elapsed: ${value}`)); ``` #### **Outputs** ``` 0 1 2 3 ... ``` ### **Using `timer()` Operator** #### **Example** The `timer()` operator creates an observable that emits a single value after a specified time: ```typescript import { timer } from 'rxjs'; const timer$ = timer(2000); timer$.subscribe(value => console.log('Timer completed!', value)); ``` #### **Delayed Execution** Execute code after a delay: ```typescript const delayed$ = timer(5000); delayed$.subscribe(() => console.log('5 seconds passed!')); ``` #### **Outputs** ``` Timer completed! 0 ``` ### **Using `range()` Operator** #### **Example** The `range()` operator creates an observable that emits a sequence of numbers within a specified range: ```typescript import { range } from 'rxjs'; const range$ = range(1, 10); range$.subscribe(value => console.log(value)); ``` #### **Emitting Sequences** Generate a range of numbers: ```typescript const range$ = range(5, 5); range$.subscribe(value => console.log(value)); ``` #### **Outputs** ``` 1 2 3 4 5 6 7 8 9 10 ``` ### **Using `defer()` Operator** #### **Example** The `defer()` operator creates an observable only when an observer subscribes: ```typescript import { defer } from 'rxjs'; const deferred$ = defer(() => of(new Date())); deferred$.subscribe(value => console.log(value)); ``` #### **Deferred Execution** Delay the creation of an observable until subscription: ```typescript const createObservable = () => of('Deferred execution'); const deferred$ = defer(createObservable); deferred$.subscribe(value => console.log(value)); ``` #### **Outputs** ``` Current date and time when subscribed ``` ### **Using `generate()` Operator** #### **Example** The `generate()` operator creates an observable using a loop structure: ```typescript import { generate } from 'rxjs'; const generated$ = generate(0, x => x < 3, x => x + 1, x => x * 2); generated$.subscribe(value => console.log(value)); ``` #### **Custom Sequences** Create complex sequences: ```typescript const sequence$ = generate(1, x => x <= 5, x => x + 1, x => x * 2); sequence$.subscribe(value => console.log(value)); ``` #### **Outputs** ``` 0 2 4 ``` ### **Using `empty()` Operator** #### **Example** The `empty()` operator creates an observable that emits no items but terminates normally: ```typescript import { empty } from 'rxjs'; const empty$ = empty(); empty$.subscribe({ next: () => console.log('Next'), complete: () => console.log('Complete') }); ``` #### **Emitting Complete** Create observables that complete immediately: ```typescript const emptyObservable$ = empty(); emptyObservable$.subscribe({ next: () => console.log('Next'), complete: () => console.log('Complete') }); ``` #### **Outputs** ``` Complete ``` ### **Using `never()` Operator** #### **Example** The `never()` operator creates an observable that never emits items and never completes: ```typescript import { never } from 'rxjs'; const never$ = never(); never$.subscribe({ next: () => console.log('Next'), complete: () => console.log('Complete') }); ``` #### **Infinite Observables** Create observables for long-running processes: ```typescript const infinite$ = never(); infinite$.subscribe({ next: () => console.log('Next'), complete: () => console.log('Complete') }); ``` #### **Outputs** No output since it never emits or completes. ### **Using `throwError()` Operator** #### **Example** The `throwError()` operator creates an observable that emits an error: ```typescript import { throwError } from 'rxjs'; const error$ = throwError('An error occurred!'); error$.subscribe({ next: () => console.log('Next'), error: err => console.log('Error:', err), complete: () => console.log('Complete') }); ``` #### **Emitting Errors** Handle error scenarios effectively: ```typescript const errorObservable$ = throwError(new Error('Something went wrong!')); errorObservable$.subscribe({ next: () => console.log('Next'), error: err => console.log('Error:', err.message), complete: () => console.log('Complete') }); ``` #### **Outputs** ``` Error: An error occurred! ``` ### **Using `iif()` Operator** #### **Example** The `iif()` operator creates an observable based on a condition: ```typescript import { iif, of } from 'rxjs'; const condition = true; const iif$ = iif(() => condition, of('Condition is true'), of('Condition is false')); iif$.subscribe(value => console.log(value)); ``` #### **Conditional Observables** Switch between observables based on conditions: ```typescript const isEven = num => num % 2 === 0; const conditional$ = iif(() => isEven(2), of('Even'), of('Odd')); conditional$.subscribe(value => console.log(value)); ``` #### **Outputs** ``` Condition is true ``` ### **Practical Applications** #### **Combining Operators** Combine multiple operators to create complex workflows: ```typescript import { of, interval, merge } from 'rxjs'; import { map, take } from 'rxjs/operators'; const source1$ = of('A', 'B', 'C'); const source2$ = interval(1000).pipe(map(i => `Number: ${i}`), take(3)); const combined$ = merge(source1$, source2$); combined$.subscribe(value => console.log(value)); ``` #### **Real-World Scenarios** Use RxJS for handling HTTP requests, event streams, and more: ```typescript import { HttpClient } from '@angular/common/http'; import { catchError } from 'rxjs/operators'; import { of } from 'rxjs'; constructor(private http: HttpClient) {} fetchData() { this.http.get('https://api.example.com/data') .pipe( catchError(error => of(`Error: ${error.message}`)) ) .subscribe(data => console.log(data)); } ``` ### **Advanced Tips and Tricks** #### **Best Practices** - Use operators to handle errors gracefully. - Compose operators for clean and readable code. - Unsubscribe from observables to prevent memory leaks. #### **Performance Tips** - Avoid nested subscriptions. - Use `takeUntil()` for better memory management. - Leverage `Subject` and `BehaviorSubject` for efficient state management. ### **Common Pitfalls and Solutions** #### **Avoiding Mistakes** - Always unsubscribe from subscriptions to prevent memory leaks. - Use appropriate operators for the task at hand. - Handle errors using `catchError` or similar operators. #### **Debugging Tips** - Use `tap()` for logging intermediate values. - Leverage browser developer tools to inspect observable streams. - Write unit tests to verify observable behavior. ### **FAQs** **What is RxJS used for in Angular?** RxJS is used for handling asynchronous operations, managing event streams, and composing complex data flows in Angular applications. **How do I create an observable in Angular?** You can create an observable using creation operators like `of()`, `from()`, `interval()`, etc. These operators are part of the RxJS library. **Why should I use RxJS in Angular?** RxJS provides a powerful way to handle asynchronous data, allowing for more readable and maintainable code, especially in complex applications. **What is the difference between `of()` and `from()`?** `of()` creates an observable from a list of values, while `from()` creates an observable from an array or iterable. **How do I handle errors in RxJS?** You can handle errors using the `catchError` operator, which allows you to catch and handle errors within an observable sequence. **Can I use RxJS with other JavaScript frameworks?** Yes, RxJS is a standalone library and can be used with other JavaScript frameworks like React, Vue, and Node.js. ### **Conclusion** RxJS is a vital tool for Angular developers, providing a flexible and powerful way to handle asynchronous operations. By mastering RxJS creation operators and understanding how to compose and manage observables, you can build robust, scalable, and maintainable Angular applications. For further reading, consider exploring the [official RxJS documentation](https://rxjs.dev/), as well as advanced topics like custom operators and higher-order observables.
itsshaikhaj
1,922,362
30 Future Supercars and Sports Cars
Explore 30 upcoming supercars and sports cars, blending cutting-edge tech with thrilling performance...
0
2024-07-13T15:09:39
https://dev.to/businessgrow1086/30-future-supercars-and-sports-cars-392l
Explore 30 upcoming [supercars](https://www.forbes.com/sites/jimgorzelany/2019/07/23/here-are-the-coolest-new-cars-for-2020/) and sports cars, blending cutting-edge tech with thrilling performance and stunning designs for ultimate driving experiences.
businessgrow1086
1,922,363
🧑‍💻 12 Discord Communities for Learning to Code
When you're learning to code, having a strong community can make all the difference in your learning...
0
2024-07-13T15:11:23
https://dev.to/jamesmurdza/12-discord-communities-for-learning-to-code-2a7h
programming, community, discord, learning
--- title: "🧑‍💻 12 Discord Communities for Learning to Code" published: true tags: programming, community, discord, learning --- When you're learning to code, having a strong community can make all the difference in your learning journey! If you have no community around you, you can quickly find some like-minded learners on Discord. Here's a curated list of Discord server that an aspiring coder should consider joining: Certainly! Here's the revised list in the requested format: --- ## 1. The Odin Project 🟢 Members Online: 17k The Odin Project is a supportive community focused on web development, offering a comprehensive curriculum and discussions for all levels. 🔗 Join link: [Join The Odin Project Discord Server](https://discord.gg/fbFCkYabZB) --- ## 2. The Coding Den 🟢 Members Online: 23k The Coding Den is a welcoming space for coders of all levels, providing resources, coding help, and general programming discussions. 🔗 Join link: [Join The Coding Den Discord Server](https://discord.com/invite/code) --- ## 3. The Programmer's Hangout 🟢 Members Online: 22k The Programmer's Hangout is a friendly community for discussing various programming topics, sharing knowledge, and getting coding help. 🔗 Join link: [Join The Programmer's Hangout Discord Server](https://discord.com/invite/programming) --- ## 4. Harvard CS50 🟢 Members Online: 23k Harvard CS50's official Discord offers resources, peer support, and guidance for course participants. 🔗 Join link: [Join Harvard CS50 Discord Server](https://discord.gg/cs50) --- ## 5. Codecademy 🟢 Members Online: 8k Codecademy's official community provides support, project sharing, and discussions for learners. 🔗 Join link: [Join Codecademy Discord Server](https://discord.com/invite/codecademy) --- ## 6. JavaScript Mastery 🟢 Members Online: 4k JavaScript Mastery is a focused community for JavaScript enthusiasts, offering resources, project collaborations, and coding help. 🔗 Join link: [Join JavaScript Mastery Discord Server](https://discord.com/invite/javascript-mastery-programming-coding-community-710138849350647871) --- ## 7. Python 🟢 Members Online: 46k Python's community is one of the largest for Python learners, providing resources, coding help, and discussions on Python topics. 🔗 Join link: [Join Python Discord Server](https://discord.com/invite/python) --- ## 8. Devcord 🟢 Members Online: 4k Devcord is a community for web developers, offering support, resources, and discussions on front-end and back-end development. 🔗 Join link: [Join Devcord Discord Server](https://discord.com/invite/devcord) --- ## 9. Free Code Camp 🟢 Members Online: 4k Free Code Camp's official Discord provides support, resources, and collaboration opportunities for learners. 🔗 Join link: [Join Free Code Camp Discord Server](https://discord.gg/KVUmVXA) --- ## 10. CodeSupport 🟢 Members Online: 3k CodeSupport is a helpful community offering coding help, resources, and discussions on various programming topics. 🔗 Join link: [Join CodeSupport Discord Server](https://discord.gg/invite/codesupport-240880736851329024) --- ## 11. Devpost 🟢 Members Online: 1k Devpost's official Discord offers support and collaboration opportunities for hackathon enthusiasts. 🔗 Join link: [Join Devpost Discord Server](https://discord.com/invite/HP4BhW3hnp) --- ## 12. DonTheDeveloper 🟢 Members Online: 230 DonTheDeveloper is a smaller community offering personalized support, career advice, and discussions. 🔗 Join link: [Join DonTheDeveloper Discord Server](https://discord.gg/donthedeveloper) --- Know a great server that I missed? Leave a comment below!
jamesmurdza
1,923,662
Optimizing CSS for Performance
Introduction: CSS is an essential aspect of web development and plays a crucial role in creating...
0
2024-07-15T04:18:16
https://dev.to/tailwine/optimizing-css-for-performance-25kf
Introduction: CSS is an essential aspect of web development and plays a crucial role in creating visually appealing websites. However, as websites become more complex, the CSS code also tends to become larger and more complex, affecting the overall performance of the website. Slow loading websites can lead to a negative user experience, resulting in decreased traffic and potential loss of customers. Therefore, optimizing CSS for performance is crucial for a successful website. Advantages: 1. Improved Loading Time: Optimizing CSS can significantly improve the loading time of a website, making it more user-friendly and reducing bounce rates. 2. Better User Experience: A well-optimized CSS code can enhance the visual appeal of a website and make it more appealing to visitors, leading to higher user engagement. 3. Reduced Bandwidth Usage: Optimizing CSS can also help reduce the amount of data that needs to be transferred, resulting in decreased bandwidth usage and potentially saving costs. Disadvantages: 1. Tedious Process: Optimizing CSS can be a tedious and time-consuming process, especially for larger and more complex websites. 2. Skill and Knowledge Requirements: Properly optimizing CSS requires a deep understanding of CSS and various optimization techniques, making it a challenging task for beginners. Features: 1. Code Minification: One of the most common techniques for optimizing CSS includes code minification, which involves removing unnecessary characters, whitespace, and comments from the code. 2. Preprocessors: Using CSS preprocessors such as SASS or LESS can make the optimization process more efficient by allowing for the use of variables, functions, and mixins. 3. Efficient Use of Selectors: Selectors can significantly impact the performance of a website, so it is essential to use them efficiently and avoid repeating them unnecessarily. Conclusion: In conclusion, optimizing CSS for performance is critical for creating fast and visually appealing websites. However, it requires a balance between optimizing for performance and maintaining a high-quality code. With the right knowledge and techniques, developers can strike this balance and create websites that are both visually appealing and have excellent performance.
tailwine
1,922,364
Case Study: Occurrences of Words
This case study writes a program that counts the occurrences of words in a text and displays the...
0
2024-07-13T15:16:21
https://dev.to/paulike/case-study-occurrences-of-words-hnm
java, programming, learning, beginners
This case study writes a program that counts the occurrences of words in a text and displays the words and their occurrences in alphabetical order of the words. The program uses a **TreeMap** to store an entry consisting of a word and its count. For each word, check whether it is already a key in the map. If not, add an entry to the map with the word as the key and value **1**. Otherwise, increase the value for the word (key) by **1** in the map. Assume the words are case insensitive; e.g., **Good** is treated the same as **good**. The code below gives the solution to the problem. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uovtoooci9myom1f4ui7.png) `a 2 class 1 fun 1 good 3 have 3 morning 1 visit 1` The program creates a **TreeMap** (line 11) to store pairs of words and their occurrence counts. The words serve as the keys. Since all values in the map must be stored as objects, the count is wrapped in an **Integer** object. The program extracts a word from a text using the **split** method (line 13) in the [**String** class](https://dev.to/paulike/the-string-class-17o7). For each word extracted, the program checks whether it is already stored as a key in the map (line 18). If not, a new pair consisting of the word and its initial count (**1**) is stored in the map (line 19). Otherwise, the count for the word is incremented by **1** (lines 21–23). The program obtains the entries of the map in a set (line 29), and traverses the set to display the count and the key in each entry (lines 32–33). Since the map is a tree map, the entries are displayed in increasing order of words. You can display them in ascending order of the occurrence counts as well. Now sit back and think how you would write this program without using map. Your new program would be longer and more complex. You will find that map is a very efficient and powerful data structure for solving problems such as this.
paulike
1,922,365
Observables vs Promises
Asynchronous programming is a cornerstone of modern web development, enabling applications to handle...
0
2024-07-13T15:19:35
https://dev.to/itsshaikhaj/observables-vs-promises-4fm
webdev, javascript, programming, beginners
Asynchronous programming is a cornerstone of modern web development, enabling applications to handle tasks such as data fetching, event handling, and real-time updates without blocking the main execution thread. Two powerful tools for managing asynchronous operations in JavaScript are Observables and Promises. This article delves into the differences between Observables and Promises, providing clear examples to help you understand when and how to use each. ## Table of Contents 1. [Introduction to Promises](#introduction-to-promises) 2. [Introduction to Observables](#introduction-to-observables) 3. [Key Differences Between Promises and Observables](#key-differences-between-promises-and-observables) 4. [Examples](#examples-and-outputs) - [Example 1: Simple Asynchronous Operation](#example-1-simple-asynchronous-operation) - [Example 2: Handling Multiple Values Over Time](#example-2-handling-multiple-values-over-time) - [Example 3: Cancellation](#example-3-cancellation) 5. [When to Use Promises vs. Observables](#when-to-use-promises-vs-observables) 6. [Conclusion](#conclusion) ## Introduction to Promises Promises are a modern JavaScript feature designed to handle asynchronous operations. A Promise represents a single value that will be available now, in the future, or never. It can be in one of three states: pending, fulfilled, or rejected. ### Creating a Promise ```javascript const myPromise = new Promise((resolve, reject) => { setTimeout(() => { resolve('Promise resolved!'); }, 1000); }); ``` ### Consuming a Promise ```javascript myPromise.then(value => { console.log(value); }).catch(error => { console.error(error); }); ``` ### Output ``` Promise resolved! ``` ## Introduction to Observables Observables are a key feature of the Reactive Extensions (RxJS) library, used extensively in frameworks like Angular. An Observable is a stream of data that can emit multiple values over time. Unlike Promises, Observables are lazy and can be canceled. ### Creating an Observable ```javascript import { Observable } from 'rxjs'; const myObservable = new Observable(observer => { setTimeout(() => { observer.next('First value'); }, 1000); setTimeout(() => { observer.next('Second value'); }, 2000); setTimeout(() => { observer.complete(); }, 3000); }); ``` ### Subscribing to an Observable ```javascript myObservable.subscribe({ next(value) { console.log(value); }, error(err) { console.error('Error:', err); }, complete() { console.log('Completed'); } }); ``` ### Output ``` First value Second value Completed ``` ## Key Differences Between Promises and Observables - **Single vs. Multiple Values**: Promises resolve to a single value, while Observables can emit multiple values over time. - **Eager vs. Lazy**: Promises are eager, meaning they start executing immediately, whereas Observables are lazy and do not start until they are subscribed to. - **Cancellation**: Promises cannot be canceled once initiated. Observables can be canceled by unsubscribing. - **Operators**: Observables come with a rich set of operators (e.g., map, filter, debounce) for transforming and composing data streams. ## Examples ### Example 1: Simple Asynchronous Operation #### Using Promises ```javascript const fetchDataPromise = () => { return new Promise((resolve, reject) => { setTimeout(() => { resolve('Data fetched using Promise'); }, 1000); }); }; fetchDataPromise().then(data => { console.log(data); }); ``` ### Output ``` Data fetched using Promise ``` #### Using Observables ```javascript import { Observable } from 'rxjs'; const fetchDataObservable = new Observable(observer => { setTimeout(() => { observer.next('Data fetched using Observable'); observer.complete(); }, 1000); }); fetchDataObservable.subscribe({ next(data) { console.log(data); }, complete() { console.log('Completed'); } }); ``` ### Output ``` Data fetched using Observable Completed ``` ### Example 2: Handling Multiple Values Over Time #### Using Promises Handling multiple values over time with Promises involves chaining multiple Promises, which can be cumbersome. ```javascript const fetchFirst = () => { return new Promise(resolve => { setTimeout(() => { resolve('First value'); }, 1000); }); }; const fetchSecond = () => { return new Promise(resolve => { setTimeout(() => { resolve('Second value'); }, 2000); }); }; fetchFirst().then(value1 => { console.log(value1); return fetchSecond(); }).then(value2 => { console.log(value2); }); ``` ### Output ``` First value Second value ``` #### Using Observables Observables handle multiple values naturally. ```javascript import { Observable } from 'rxjs'; const multiValueObservable = new Observable(observer => { setTimeout(() => { observer.next('First value'); }, 1000); setTimeout(() => { observer.next('Second value'); }, 2000); setTimeout(() => { observer.complete(); }, 3000); }); multiValueObservable.subscribe({ next(value) { console.log(value); }, complete() { console.log('Completed'); } }); ``` ### Output ``` First value Second value Completed ``` ### Example 3: Cancellation #### Using Promises Promises cannot be canceled once they are initiated. ```javascript const nonCancellablePromise = new Promise((resolve, reject) => { setTimeout(() => { resolve('This cannot be canceled'); }, 3000); }); // You cannot cancel this promise once started. ``` #### Using Observables Observables can be canceled by unsubscribing. ```javascript import { Observable, Subscription } from 'rxjs'; const cancellableObservable = new Observable(observer => { const timeoutId = setTimeout(() => { observer.next('This will not be logged'); observer.complete(); }, 3000); // Cleanup logic return () => { clearTimeout(timeoutId); console.log('Observable canceled'); }; }); const subscription: Subscription = cancellableObservable.subscribe({ next(value) { console.log(value); }, complete() { console.log('Completed'); } }); // Cancel the Observable after 1 second setTimeout(() => { subscription.unsubscribe(); }, 1000); ``` ### Output ``` Observable canceled ``` ## When to Use Promises vs. Observables - **Use Promises**: - When you need to handle a single asynchronous operation that resolves once (e.g., HTTP requests, reading a file). - When you want simplicity and don't need the advanced capabilities of Observables. - **Use Observables**: - When dealing with multiple values over time (e.g., user input, WebSocket connections, real-time data). - When you need to cancel the asynchronous operation. - When you need advanced operators for data transformation and composition. ## Conclusion Both Promises and Observables are powerful tools for managing asynchronous operations in JavaScript. Promises are simpler and great for single operations, while Observables offer more flexibility and are ideal for complex scenarios involving multiple values and real-time data streams. Understanding the strengths and use cases of each will help you make informed decisions in your development projects. By mastering Promises and Observables, you can write more efficient, readable, and maintainable asynchronous code, improving the overall performance and user experience of your applications.
itsshaikhaj
1,922,390
Dive into Computer Science: Explore Free Online Courses
The article is about a curated collection of five free online courses in the field of Computer Science. It covers a diverse range of topics, including programming languages, computer architecture, privacy and security in social networks, algorithms, and an introduction to computer science and Python programming. The courses are provided by renowned institutions such as Northeastern University, Carnegie Mellon University, and the Massachusetts Institute of Technology (MIT), offering learners the opportunity to expand their knowledge and skills in the dynamic world of technology. The article aims to guide readers through these exceptional educational resources, providing a brief overview of each course and its key highlights to pique their interest and encourage them to explore these valuable learning opportunities.
27,985
2024-07-13T15:21:25
https://dev.to/getvm/dive-into-computer-science-explore-free-online-courses-3ij6
getvm, programming, freetutorial, collection
Are you eager to expand your knowledge in the dynamic field of Computer Science? Look no further! We've curated a collection of five free online courses that cover a wide range of topics, from programming languages and computer architecture to privacy and security in social networks. 🌐 ![MindMap](https://internal-api-drive-stream.feishu.cn/space/api/box/stream/download/authcode/?code=YjNkNTYyZWFiM2U0NmE2MDdjYjMxODRkM2FkOWQzMDdfY2QyMTkyNzI0N2E4YjViMmI1ODY3ZWJlYjhlNWQ4MTBfSUQ6NzM5MTE0MDc3ODU4NTY2OTYzNl8xNzIwODg0MDgzOjE3MjA5NzA0ODNfVjM) ## Mastering Programming Languages with Northeastern University Delve into the fundamentals of programming languages with this comprehensive course from Northeastern University. Explore the intricacies of language design, implementation, and applications, equipping you with the essential skills to become a proficient programmer. 👨‍💻 [Programming Languages | Northeastern University](https://getvm.io/tutorials/cs-4400-programming-languages-northeastern-university) ![Programming Languages | Northeastern University](https://tutorial-screenshot.getvm.io/4054.png) ## Unraveling the Mysteries of Computer Architecture Gain a solid understanding of the basic concepts of computer architecture with this free, Creative Commons-licensed book. Dive deep into the inner workings of computers and learn how they function at the hardware level. 🖥️ [Basic Computer Architecture](https://getvm.io/tutorials/basic-computer-architecture) ![Basic Computer Architecture](https://tutorial-screenshot.getvm.io/161.png) ## Safeguarding Privacy and Security in Online Social Networks Discover the challenges and best practices in maintaining privacy and security on online social media platforms. This comprehensive course from IIT Madras explores the latest techniques to protect user data and secure social media applications. 🔒 [Privacy and Security in Online Social Networks | IIT Madras](https://getvm.io/tutorials/privacy-and-security-in-online-social-networks-iit-madras) ## Mastering Algorithms with Carnegie Mellon University Enhance your problem-solving skills with this algorithms course from the renowned Carnegie Mellon University. Explore fundamental algorithms and their proofs, taught by world-class professors. 🧠 [Algorithms | Carnegie Mellon University](https://getvm.io/tutorials/15-451-651-algorithms-carnegie-mellon-university) ![Algorithms | Carnegie Mellon University](https://tutorial-screenshot.getvm.io/4089.png) ## Dive into Computer Science and Python Programming with MIT Embark on a comprehensive journey through computer science and Python programming with this course from the prestigious Massachusetts Institute of Technology (MIT). Develop a strong foundation in problem-solving and coding. 🐍 [MIT's Introduction to Computer Science | Python Programming](https://getvm.io/tutorials/mits-introduction-to-computer-science-and-programming-in-python) ![MIT's Introduction to Computer Science | Python Programming](https://tutorial-screenshot.getvm.io/2061.png) Unlock your potential and explore these remarkable free online courses in Computer Science. Whether you're a beginner or an experienced learner, these resources will equip you with the knowledge and skills to thrive in the dynamic world of technology. 💻 Happy learning! ## Enhance Your Learning Experience with GetVM Playground Unlock the full potential of the computer science courses featured in this collection by utilizing the GetVM Playground. GetVM is a powerful Google Chrome browser extension that provides an online coding environment, allowing you to seamlessly practice and apply the concepts you learn. 🚀 With the GetVM Playground, you can dive right into hands-on exercises, experiment with code, and solidify your understanding of the course material. No more hassle with setting up local development environments - the Playground handles everything for you, so you can focus on learning and honing your skills. 💻 Embrace the convenience of having a dedicated, cloud-based workspace that synchronizes with the course content. Leverage the Playground's intuitive interface, real-time collaboration features, and instant feedback to enhance your learning journey. Whether you're a beginner or an experienced programmer, the GetVM Playground will empower you to put your newfound knowledge into practice and unlock your full potential. 🌟 So, as you explore these exceptional computer science courses, be sure to take advantage of the GetVM Playground to maximize your learning experience. Unlock the power of hands-on practice and watch your skills soar to new heights! 🚀 --- ## Want to Learn More? - 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore) - 💬 Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) 😄
getvm
1,922,391
Automating Task Organization with the openai-dotnet Library
Introduction Productivity is a goal we all strive for, especially in a world where the...
0
2024-07-13T15:22:44
https://dev.to/alisson_podgurski/automating-task-organization-with-the-openai-dotnet-library-566c
chatgpt, csharp, dotnet, iot
## Introduction Productivity is a goal we all strive for, especially in a world where the to-do list seems never-ending. Fortunately, technology is here to help! In this article, we'll explore how to integrate GPT Chat with the [openai-dotnet library](https://github.com/openai/openai-dotnet) to create a productivity assistant that organizes your tasks, prioritizes what's most important, and even offers personalized tips to improve your efficiency. ## What is the openai-dotnet Library? The openai-dotnet library is an amazing tool that allows developers to easily integrate GPT language models into their .NET applications. It's like having a super-intelligent virtual assistant by your side, ready to help in any situation. ## Creating the Productivity Assistant Let's start by creating a productivity assistant that does everything except wash the dishes. Here is the code we used: ```csharp using OpenAI.Chat; public class TaskOrganizer { private readonly ChatClient _chatClient; public TaskOrganizer(string apiKey) { _chatClient = new(model: "gpt-4o", apiKey); } public async Task<string> OrganizeTasksAsync(string taskList) { var prompt = $"Organize the following tasks by category and importance:\n\n{taskList}"; var completion = await _chatClient.CompleteChatAsync(prompt); return completion.Value.ToString(); } public async Task<string> PrioritizeTasksAsync(string organizedTasks) { var prompt = $"Based on the following organized tasks, suggest a priority list:\n\n{organizedTasks}"; var completion = await _chatClient.CompleteChatAsync(prompt); return completion.Value.ToString(); } public async Task<string> GetProductivityTipsAsync(string taskDetails) { var prompt = $"Based on the following task details, provide personalized productivity tips:\n\n{taskDetails}"; var completion = await _chatClient.CompleteChatAsync(prompt); return completion.Value.ToString(); } } ``` ## Program Implementation Now, let's configure the main program to use our TaskOrganizer class: Don't forget to create your appsettings.json ```json { "OpenAI": { "ApiKey": "<your-key>" } } ``` After that: ```csharp using Microsoft.Extensions.Configuration; class Program { public static async Task Main(string[] args) { var configuration = new ConfigurationBuilder() .SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true) .Build(); var apiKey = configuration["OpenAI:ApiKey"]; var assistant = new ProductivityAssistant(apiKey); await assistant.Run(); } } public class ProductivityAssistant { private readonly TaskOrganizer _taskOrganizer; public ProductivityAssistant(string apiKey) { _taskOrganizer = new TaskOrganizer(apiKey); } public async Task Run() { string taskList = "1. Complete project report\n2. Attend team meeting\n3. Respond to client emails\n4. Plan next week's schedule"; string organizedTasks = await _taskOrganizer.OrganizeTasksAsync(taskList); Console.WriteLine($"Organized Tasks: {organizedTasks}\n\n"); string priorityList = await _taskOrganizer.PrioritizeTasksAsync(organizedTasks); Console.WriteLine($"Priority List: {priorityList}\n\n"); string taskDetails = "1. Complete project report - High importance, deadline tomorrow\n2. Attend team meeting - Medium importance, scheduled for 3 PM\n3. Respond to client emails - High importance, response needed by end of day\n4. Plan next week's schedule - Low importance, can be done anytime"; string productivityTips = await _taskOrganizer.GetProductivityTipsAsync(taskDetails); Console.WriteLine($"Productivity Tips: {productivityTips}\n\n"); } } ``` ## Conclusion Integrating GPT Chat with the openai-dotnet library is like having an intelligent personal assistant that helps you organize your tasks, prioritize what really matters and even offers tips to increase productivity. This approach not only keeps tasks in order, but also provides valuable insights to improve personal and team efficiency. Try this integration in your own projects and discover how artificial intelligence can transform the way you manage your time and tasks. After all, who wouldn't like a little magic in their daily work? ## Additional Notes 1. Make sure you have your OpenAI API key configured correctly in the appsettings.json file. 2. Experiment with adjusting the prompts to get different results that better suit your specific needs. 3. Stay tuned for updates to the openai-dotnet library to take advantage of new features and improvements. ## Example Code To see the full code for the example discussed in this article, visit the [repository on GitHub](https://github.com/AlissonPodgurskiSolution/ProductivityAssistant).
alisson_podgurski
1,922,392
What is the difference between full stack developer and software engineer
The world of software development is brimming with talented individuals, but two terms often cause...
0
2024-07-13T15:24:23
https://dev.to/epakconsultant/what-is-the-difference-between-full-stack-developer-and-software-engineer-2iie
software
The world of software development is brimming with talented individuals, but two terms often cause confusion: full stack developers and software engineers. While both play crucial roles in bringing software to life, their areas of expertise and career paths can differ. This guide sheds light on the distinctions between these vital roles. [A Beginner Guide to EC2, RDS, IAM, Docker, Jenkins, Kubernetes, and More](https://www.amazon.com/dp/B0CS4QHLKY) Full Stack Mastery: Building from the Ground Up Imagine a website or application. A full-stack developer is like the architect and construction crew rolled into one. They possess expertise across the entire development lifecycle, from the user-facing front-end to the server-side logic that powers the application. - Front-End Proficiency: Full-stack developers are comfortable with the technologies that users interact with directly. They utilize programming languages like HTML, CSS, and JavaScript to design visually appealing and interactive user interfaces. Popular front-end frameworks like React, Angular, or Vue.js further enhance their development capabilities. - Back-End Savvy: But a full-stack developer doesn't stop at the surface. They delve into the back-end, the engine that makes the application tick. Languages like Python, Java, or Ruby power server-side logic, and frameworks like Django or Spring streamline development. - Database Management: Storing and managing application data effectively is crucial. Full-stack developers understand relational databases like MySQL or PostgreSQL to ensure data integrity and efficient retrieval. [Full Stack Software Engineer II](https://app.draftboard.com/apply/PKIhkomTJ) The Software Engineer: Specialists in the Software Landscape Software engineers, on the other hand, can be thought of as specialists within the broader development landscape. They possess a deep understanding of software development principles, algorithms, and design patterns. While some software engineers might have full-stack capabilities, their focus may lie in specific areas: - Front-End Engineers: These specialists dedicate their expertise to crafting user interfaces. They stay updated on the latest front-end technologies and trends, pushing the boundaries of user experience and interaction design. - Back-End Engineers: Their domain lies on the server-side. Back-end engineers design and develop the core functionalities of an application, ensuring efficient data processing, security, and scalability. - DevOps Engineers: They bridge the gap between development and operations. DevOps engineers implement tools and processes to automate software deployment, testing, and infrastructure management, ensuring a smooth and efficient software release cycle. [Pocket-Friendly Feasts: 5 Dollar Meals That Satisfy](https://benable.com/sajjaditpanel/e98543c1a254e10c80b2) Collaboration is Key: Working Together for Success While full-stack developers can handle a broad spectrum of tasks, software engineers with specialized expertise are often necessary for complex projects. In large-scale development environments, teams typically consist of both full-stack developers and software engineers working collaboratively: - Full-stack developers might be responsible for building core functionalities and user interfaces for smaller projects or early-stage prototypes. - Software engineers with specialized knowledge (front-end, back-end, DevOps) can be brought in for larger projects or specific development phases requiring in-depth expertise. Choosing Your Path: Full Stack Versatility vs. Software Engineering Specialization The ideal career path depends on your interests and goals: - Full-stack development offers a versatile skillset, making you a valuable asset for smaller companies or startups where agility and the ability to wear multiple hats is essential. - Software engineering allows for deeper specialization within a specific area. This path can be ideal for those who enjoy technical challenges and want to master a particular aspect of software development. The Evolving Landscape: Adaptability is Paramount Both full-stack developers and software engineers need to be adaptable. The tech industry is constantly evolving, and new technologies emerge frequently. Continuous learning and a willingness to upskill are essential for success in either path. Conclusion: Building the Future, Together Full-stack developers and software engineers are both crucial players in the software development world. Understanding the distinctions between these roles empowers you to choose the path that aligns with your interests and career aspirations. Remember, regardless of your chosen path, collaboration and a commitment to continuous learning will be your guiding lights as you navigate the exciting and ever-evolving world of software development.
epakconsultant
1,922,394
Extending Prisma ORM with Custom Middleware: Logging, Validation, and Security
What Will This Blog Cover? In this blog, we will: Explain what middleware is and how it...
0
2024-07-13T15:27:10
https://dev.to/emal_isuranga_2d9d79931d7/extending-prisma-orm-with-custom-middleware-logging-validation-and-security-29cc
prisma, middleware, prismaorm, node
#### What Will This Blog Cover? In this blog, we will: 1. Explain what middleware is and how it works in Prisma ORM. 2. Show how to create middleware for logging, validation, and security. 3. Provide examples of how to use middleware in your projects. By the end of this blog, you’ll understand how to extend Prisma ORM’s functionality with custom middleware to make your applications more robust and secure. ## Section 1: Understanding Prisma Middleware #### What is Middleware in Prisma? Middleware in Prisma is a piece of code that runs before or after a database query. It acts like a checkpoint where you can add extra tasks, such as logging, validation, or security checks. #### How Does Middleware Work with Prisma’s Request Lifecycle? When you make a database request with Prisma, the request goes through several stages. Middleware can be added at these stages to perform additional actions. This helps you control and modify the behavior of your database interactions. #### Overview of Middleware Use Cases Middleware can be used for various purposes: - Logging: Keep a record of all database queries and their execution times. - Validation: Ensure that the data being saved meets specific criteria. - Security: Implement checks to protect your data from unauthorized access. ## Section 2: Setting Up Your Prisma Project #### Basic Prisma Setup Before you start using middleware, you need to have a basic Prisma setup. This includes having Node.js installed and a database to connect to. #### Guide to Initializing a Prisma Project 1.**Install Prisma CLI:** Open your terminal and run: `npm install @prisma/cli --save-dev` 2.**Initialize Prisma:**Set up Prisma in your project by running: `npx prisma init` 3.**Install Prisma Client:** Add the Prisma Client to your project with: `npm install @prisma/client` 4.**Configure the Database:** Update the prisma/schema.prisma file to define your data models and connect to your database. With these steps, your Prisma project will be ready to use, and you can start adding middleware to enhance its functionality. ## Section 3: Creating a Custom Middleware #### 3.1 Basic Middleware Structure **What is the Basic Structure of a Middleware Function in Prisma?** A middleware function in Prisma is a piece of code that runs every time a database query is made. It has two main parts: - Parameters: Information about the query being made. - Next Function: A function that allows the query to proceed to the database. **Example of a Simple Middleware to Log Queries** Here’s how you can create a middleware that logs every query: **1.Create the Middleware Function:** ```javascript const { PrismaClient } = require('@prisma/client'); const prisma = new PrismaClient(); prisma.$use(async (params, next) => { // Log the query parameters console.log('Params:', params); // Execute the query const result = await next(params); // Log the result console.log('Result:', result); return result; }); ``` **2.Explanation:** - Params: Contains information about the model and action of the query. - Next: A function that allows the query to be executed. - Logging: Logs the parameters before executing the query and the result after the query. This simple middleware logs every query made to the database, helping you keep track of what’s happening in your application. ##Section 4: Adding Custom Validation **4.1 Why Custom Validation?** **Importance of Custom Validation** Custom validation is crucial because it ensures that the data being saved to your database meets specific requirements and standards. It helps in maintaining data integrity and preventing invalid or malicious data from being stored. Use Cases Where Built-in Validation is Insufficient - Complex Data Structures: When you need to validate nested objects or arrays. - Conditional Validation: When validation rules depend on other fields’ values. - Custom Business Logic: When you have specific business rules that need to be enforced, which built-in validation rules can’t handle. **4.2 Implementing Validation Middleware** Creating Middleware to Validate Data Here’s how you can create middleware to validate data before it reaches the database: **1.Create the Validation Middleware Function:** ```javascript const { PrismaClient } = require('@prisma/client'); const prisma = new PrismaClient(); prisma.$use(async (params, next) => { // Custom validation for the 'User' model on create action if (params.model === 'User' && params.action === 'create') { const { email, password } = params.args.data; // Example validation: Check if email contains '@' if (!email.includes('@')) { throw new Error('Invalid email format'); } // Example validation: Check if password length is at least 8 characters if (password.length < 8) { throw new Error('Password must be at least 8 characters long'); } } // Proceed with the query return next(params); }); ``` This middleware validates the data for the ‘User’ model before it is saved to the database, ensuring that only valid data is stored. ## Section 5: Implementing Security Checks **5.1 Importance of Security Middleware** **Why Security Checks are Critical** Security checks are essential to protect your application and its data from unauthorized access and malicious activities. They ensure that only authorized users can perform certain actions, thereby safeguarding sensitive information and maintaining data integrity. Common Security Concerns - Authorization: Ensuring that users have the right permissions to access or modify data. - Data Access Control: Restricting access to certain data based on user roles or other criteria. **5.2 Creating Security Middleware** Example of Middleware to Enforce User Permissions Here’s how you can create middleware to enforce user permissions: **1.Create the Security Middleware Function:** ```javascript const { PrismaClient } = require('@prisma/client'); const prisma = new PrismaClient(); prisma.$use(async (params, next) => { // Security check for the 'Post' model on update and delete actions if (params.model === 'Post' && (params.action === 'update' || params.action === 'delete')) { const postId = params.args.where.id; // Fetch the post to check the author const post = await prisma.post.findUnique({ where: { id: postId } }); // Replace 'getCurrentUserId' with actual implementation to get current user's ID if (post.authorId !== getCurrentUserId()) { throw new Error('You are not authorized to perform this action'); } } // Proceed with the query return next(params); }); // Dummy function to simulate fetching the current user's ID function getCurrentUserId() { return 1; // Replace with actual implementation } ``` This middleware enforces user permissions, ensuring that only the author of a post can update or delete it, thereby enhancing the security of your application. ## Section 6: Combining Middleware #### How to Combine Multiple Middleware Functions Combining multiple middleware functions in Prisma allows you to perform several tasks in sequence, such as logging, validation, and security checks. Prisma executes middleware in the order they are defined, making it easy to chain multiple middleware functions together. #### Example of Using Multiple Middleware for Logging, Validation, and Security Here's how you can combine middleware functions to handle logging, validation, and security: 1. **Define Middleware Functions**: ```javascript const { PrismaClient } = require('@prisma/client'); const prisma = new PrismaClient(); // Logging middleware const loggingMiddleware = async (params, next) => { console.log('Params:', params); const result = await next(params); console.log('Result:', result); return result; }; // Validation middleware const validationMiddleware = async (params, next) => { if (params.model === 'User' && params.action === 'create') { const { email, password } = params.args.data; if (!email.includes('@')) { throw new Error('Invalid email format'); } if (password.length < 8) { throw new Error('Password must be at least 8 characters long'); } } return next(params); }; // Security middleware const securityMiddleware = async (params, next) => { if (params.model === 'Post' && (params.action === 'update' || params.action === 'delete')) { const postId = params.args.where.id; const post = await prisma.post.findUnique({ where: { id: postId } }); if (post.authorId !== getCurrentUserId()) { throw new Error('You are not authorized to perform this action'); } } return next(params); }; function getCurrentUserId() { return 1; // Replace with actual implementation } ``` 2. **Combine and Use Middleware Functions**: ```javascript // Adding middleware to Prisma client prisma.$use(loggingMiddleware); prisma.$use(validationMiddleware); prisma.$use(securityMiddleware); ``` 3. **Explanation**: - **Logging Middleware**: Logs the parameters and results of each query. - **Validation Middleware**: Validates user data before it is saved to the database. - **Security Middleware**: Ensures that only authorized users can update or delete posts. - **Combining Middleware**: Middleware functions are added to the Prisma client in the desired order, ensuring they run sequentially. By combining these middleware functions, you can create a robust Prisma setup that handles multiple aspects of query processing, from logging and validation to security, making your application more reliable and secure. ## Section 7: Testing Your Middleware #### Importance of Testing Middleware Testing middleware ensures it works correctly, prevents bugs, and maintains code quality. #### Tips for Testing Middleware - **Isolate Logic**: Test middleware separately from other parts of the application. - **Mock Dependencies**: Simulate database interactions and other external dependencies. - **Check Edge Cases**: Ensure the middleware handles all possible scenarios, including edge cases and error conditions.
emal_isuranga_2d9d79931d7
1,922,395
Cryptomator: end-to-end encrypt files in any cloud
With Cryptomator you can easily encrypt your files before uploading them to the cloud. This way you can be sure that your data is safe and secure even if the cloud provider gets hacked.
0
2024-07-13T15:35:26
https://dev.to/andreagrandi/cryptomator-end-to-end-encrypt-files-in-any-cloud-54kc
cryptomator, encryption, cloud, privacy
--- title: "Cryptomator: end-to-end encrypt files in any cloud" published: true description: "With Cryptomator you can easily encrypt your files before uploading them to the cloud. This way you can be sure that your data is safe and secure even if the cloud provider gets hacked." tags: cryptomator, encryption, cloud, privacy cover_image: https://www.andreagrandi.it/posts/cryptomator-end-to-end-encrypt-files-in-cloud/split-screenshots.png --- If you are using a cloud storage service like **Google Drive**, **Dropbox**, **OneDrive**, etc., you should be aware that **your files are not encrypted by default**. This means that the cloud provider is able to access and, in some cases, even share them with third parties. In case your cloud provider gets hacked, your files could also be exposed to the public. To prevent this from happening, you can use a tool like [**Cryptomator**](https://cryptomator.org) to automatically encrypt your files before uploading them to the cloud. ## How does it work? Cryptomator is a **free** and **open-source** software that allows you to create an encrypted vault on your computer where you can store your files. Once you have added your files to the vault, Cryptomator will encrypt them using strong encryption algorithms (256 bit AES). To be more clear, **Cryptomator doesn't upload your files**. Your cloud provider application does: You write files to the Cryptomator vault -> Cryptomator encrypts them and writes them in a folder which is synced up to your cloud provider -> Your cloud provider application uploads the encrypted files to the cloud. When you want to access your files, you simply access them from your mounted vault and Cryptomator will automatically decrypt them for you... You can find the rest of this post here: https://www.andreagrandi.it/posts/cryptomator-end-to-end-encrypt-files-in-cloud/
andreagrandi
1,922,397
Undress AI Tool: An Advanced Guide for Professional Photographers
Professional photographers require sophisticated tools to enhance their work, ensuring that each...
0
2024-07-13T15:36:12
https://dev.to/gogato2980/undress-ai-tool-an-advanced-guide-for-professional-photographers-2bpo
Professional photographers require sophisticated tools to enhance their work, ensuring that each image meets high standards of quality and creativity. The Undress AI Tool offers advanced AI-driven features that cater to the needs of professional photographers, streamlining their workflow and improving the final output. This guide provides an in-depth look at how professional photographers can leverage the Undress AI Tool to achieve outstanding results. **Introduction** The [Undress AI Tool](https://www.undressaitool.com/) is designed to provide powerful image editing capabilities, making it an indispensable tool for professional photographers. This guide will help you navigate its advanced features and optimize your photography workflow. **Setting Up Your Professional Account** Visit the Website: Go to the Undress AI Tool homepage. Sign Up: Click on the “Sign Up” button to create a new professional account. Enter your professional details, including name, email address, and business information. Verification: Verify your email address by clicking on the link sent to your inbox. Login: Use your credentials to log in to your account. Navigating the Professional Dashboard Once logged in, you will be directed to the professional dashboard. Here’s an overview: Home: Your central hub for accessing all features and recent activities. Projects: Manage and organize your photography projects with ease. Settings: Customize your account settings, including profile details, preferences, and business information. Analytics: Access analytics to track your editing activity and performance. Help: Get access to advanced tutorials, FAQs, and priority customer support. **Uploading High-Resolution Images** Click on “Upload Image”: Located prominently on the dashboard. Select File: Choose high-resolution image files from your device. Supported formats include JPEG, PNG, TIFF, and RAW. Upload: Click “Open” to upload your image to the platform. The upload process is optimized for high-resolution files. **Exploring Advanced Features** Precision Image Editing AI Tools: Utilize a range of AI-powered tools for detailed image edits, including fine object removal, intricate background changes, and precision enhancement filters. Manual Adjustments: Make manual adjustments for finer control over the editing process, allowing for professional-grade results. Batch Processing Upload Multiple Images: Upload multiple images at once for batch processing. Apply Edits: Apply consistent edits across all images in a batch, ensuring uniform quality and saving time. Interactive Demo for Professionals Try the Demo: Use the interactive demo to experiment with advanced features before applying them to your projects. Follow Instructions: Upload an image and follow the guided steps to apply different professional-grade edits and see real-time results.
gogato2980
1,922,398
Effective task management in Habitica using a text file and AWS Serverless
Creating tasks (to-dos) in Habitica is not the most effective and pleasant thing using the UI...
0
2024-07-13T15:45:18
https://pabis.eu/blog/2024-07-13-Easy-Manage-Habitica-Tasks-Lambda-S3.html
habitica, lambda, stepfunctions, dynamodb
Creating tasks (to-dos) in Habitica is not the most effective and pleasant thing using the UI (although tools like Trello or Jira are far from better). Compared to editing a simple and single text file, whether formatted as Markdown or not, is much easier and gives a better overview (at least if the list is just a simple title without a long description). In [previous post](https://dev.to/aws-builders/track-your-performance-using-habitica-timestream-and-grafana-1igm) I demonstrated you a way to measure overall productivity or performance in Habitica. However, I myself use it only for daily and repeating tasks - I don't have to use the UI to define it every time. For one-time to-dos the situation is very different. I want simplicity and efficiency. And what is more efficient than simple text? I will use Habitica's API powers again to create a solution that will transform a text file to a list of tasks in Habitica including performing edits. I will also show you how to solve the problem of low API rate limits. The completed project is tagged `v3` and built on top of previous posts. - Repo: [https://github.com/ppabis/habitica-item-seller/tree/v3](https://github.com/ppabis/habitica-item-seller/tree/v3) - Part 1: [Use AWS Serverless to sell items in Habitica](https://dev.to/aws-builders/use-aws-serverless-to-sell-items-in-habitica-91) - Part 2: [Track your performance using Habitica, Timestream and Grafana](https://dev.to/aws-builders/track-your-performance-using-habitica-timestream-and-grafana-1igm) Processing a list of tasks -------------------------- The lists of tasks will be uploaded to an S3 bucket. An event notification in this bucket will trigger a Lambda function that will process this file and create and update tasks. The format for the tasks will be the following: ``` ID due date difficulty+attribute - task description 0001. 15/07/2024 TP - Wash the dishes 0002. HI - Create a new blog post ``` ID will be used to track the tasks in DynamoDB at a later stage. You can just rely on the line number but I want to ensure consistency. This also allows to split tasks between multiple files. Due date can be a date in `DD/MM/YYYY` format or empty string in case there's no due date. Difficulties are the following: Trivial, Easy, Medium and Hard. They will be mapped to appropriate `priority` values in Habitica. Attributes are not visible in Habitica UI - it determines which skill of the player will multiply task's value on completion: Strength, Intelligence, Perception and Constitution. Any line that will not follow this format will be just discarded - we will use regular expressions to verify it. In the above example, the first line will be ignored. This allows us to keep comments or more details for each task and they won't be processed. Let's define the first function with which we will be able to process the tasks into Python objects. It will split the input data into lines and then parse each line to produce an object with all the task parameters. We will also wrap each line processing call in a try-catch block so that in case the date is formatted badly, we will just print an error and continue. ```python import re from datetime import datetime DIFFICULTIES = { 'T': '0.1', 'E': '1', 'M': '1.5', 'H': '2' } ATTRIBUTES = { 'S': 'str', 'I': 'int', 'P': 'per', 'C': 'con' } def line_to_task(line: str) -> dict | None: # (Task ID). (Date?) (Difficulty) (Attribute) - (Title), feel free to adapt to your needs r = re.match('^(\\d+).\\s+([0-9/]*)\\s*([TEMHtemh])([SIPCsipc])\\s+-\\s+(.*)$', line) if r: date = None if r.group(2) == '' else datetime.strptime(r.group(2), '%d/%m/%Y') difficulty = DIFFICULTIES[r.group(3).upper()] attribute = ATTRIBUTES[r.group(4).upper()] return { 'id': r.group(1), 'date': date, 'difficulty': difficulty, 'attribute': attribute, 'title': r.group(5), } return None def parse_task_list(task_list: str) -> list[dict]: tasks = [] for line in task_list.split('\n'): try: task = line_to_task(line.strip()) if task: tasks.append(task) except Exception as e: print(f"Error processing line '{line}': {e}") return tasks ``` I tested the function with the following task list and received a correct output when parsing all the values. I formatted the output to be somewhat of a readable format to see if every value has its place. ### Input ``` This is an unrelated lien taht should be ignored 0001. 01/08/2024 MI - Create a list of tasks 0002. 02/08/2024 ES - Create another list of tasks 0003. TS - A task with no due date! This is not a task 0004. This is also not a task 0005. 03/10/2024 - Also this is also not a task 0006. TC - test 12345 this should be a task - with a hyphen -- extra - hyphens 0007. 05/08/2024 HP - 😂 emojis 🏆 0010. 66/12/3033 EC - this task should be just warned and not crash but is wrong 0234. HC - test 0234 this should be a task ``` ### Output ``` Error processing line '0010. 66/12/3033 EC - this task should be just warned and not crash but is wrong': time data '66/12/3033' does not match format '%d/%m/%Y' 0001: difficulty=1.5 attribute=int date=2024-08-01 00:00:00 Create a list of tasks 0002: difficulty=1 attribute=str date=2024-08-02 00:00:00 Create another list of tasks 0003: difficulty=0.1 attribute=str date=None A task with no due date! 0006: difficulty=0.1 attribute=con date=None test 12345 this should be a task - with a hyphen -- extra - hyphens 0007: difficulty=2 attribute=per date=2024-08-05 00:00:00 😂 emojis 🏆 0234: difficulty=2 attribute=con date=None test 0234 this should be a task ``` Storing and updating tasks -------------------------- Now we have to decide which tasks need to be created and which ones need to be updated. We need to map our task IDs to Habitica's UUIDs. We will use DynamoDB for that. This will be also more efficient to compare contents of the task with an entry in DynamoDB in contrast to querying Habitica API for each task. However, to keep things even simpler, I will deliberately skip handling deletion of the tasks - for this we would need to either rely on file differences (lines removed) or scan entire table and compare to what we have loaded from the file. (Alternatively, my idea was a `-` in front of the ID is to remove task in one file upload and then task can be safely deleted from the file for next uploads, but this post was already very long 😅). Our primary key will be the task ID. As it always increases, it shouldn't pose any problems. The keys don't have to be increased by one - you can use `10000`, `20000` and so on to split projects or just keep some order/grouping. I will first define the simplest function which is creating a new task. It will return a formatted `dict` that can be directly submitted to DynamoDB. The `date` field will always be that will be either an empty string or in an ISO standard format such as `2024-07-08T18:33:56`. This will make things simpler rather than checking if the `date` exists in the record. ```python def create_task(task: dict) -> dict: return { 'id': task['id'], 'title': task['title'], 'date': task['date'].isoformat() if task['date'] else "", 'difficulty': task['difficulty'], 'attribute': task['attribute'] } ``` The next function will be used to update the task found already in DynamoDB table. It will compare each field separately and return the changed object if some changes were made to it or `None` when we shouldn't update the row. ```python def compare_and_update(task: dict, item: dict) -> dict | None: dirty = False task['date'] = task['date'].isoformat() if task['date'] else "" if task['title'] != item['title']: item['title'] = task['title'] dirty = True if task['date'] != item['date']: item['date'] = task['date'] dirty = True # ... continues for other fields... return item if dirty else None ``` As the last point, we will combine the logic to use both of the functions to either create, update or skip the task. However, here comes a twist - Habitica allows for batch creating new tasks but can only update existing ones one by one. To save on time and executions, we will create a list of tasks to be created and do this immediately in this function. Tasks that need to be updated will be updated in DynamoDB and their IDs will be passed further to the Step Function. Why? The problem is that Habitica API is very strict on the amount of calls we can do (30 per minute). We can use simple `sleep` in the Lambda but this will incur unnecessary costs. Step Function has a `Wait` block that can save us a bit of money. The process will look something like on the diagram below. ![Processing tasks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnn0mxmlavhk4dp0hh6y.jpg) How to determine if a task is new or need updating? We don't have to reach out to the API. Tasks that are known need to have UUID mapping it to Habitica. So we just need to check this one attribute. If it is there and change was detected with above function, we update the row in DynamoDB and add ID to the list. The next function will simply load it again from DynamoDB and send a request to Habitica. ```python def store_update_tasks(tasks: list[dict]) -> list[str]: updated_ids, new_tasks = [], [] for task in tasks: # Update a task in DynamoDB if the input is different than what is in # the database. If the record in the database does not have UUID, it # means that it needs to be created in Habitica. response = ddb.get_item(Key={'id': task['id']}) if 'Item' in response: updated = compare_and_update(task, response['Item']) if updated: ddb.put_item(Item=updated) if 'habitica_uuid' in updated and updated['habitica_uuid']: # This item exists in Habitica updated_ids.append(updated['id']) else: # This item was existing but wasn't submitted to Habitica yet new_tasks.append(updated) else: print(f"Task {task['id']} is up to date.") # Task not found so create a new one else: ddb.put_item(Item=create_task(task)) new_tasks.append(task) id_uuids = batch_create_tasks(new_tasks) # Dummy function for now # Update rows in DynamoDB to hold UUIDs of the recently created tasks for local_id, uuid in ids_uuids: ddb.update_item( Key={'id': task_id}, UpdateExpression='SET habitica_uuid = :uuid', ExpressionAttributeValues={':uuid': uuid} ) return updated_ids def batch_create_tasks(tasks) -> list[(str, str)]: # Dummy function for now return [(task['id'], f"123456-{task['id']}") for task in tasks] ``` As the last step, we have to create Lambda handler which will be the target for S3 event on object upload or update - so in case we create a new task list or update it, the Lambda will be triggered. If you plan on using multiple files in S3, this function should be fine with processing it. However, you need to keep the IDs in all lists unique. We will process all the objects that were sent by the event and do it safely in try-catch block so that one bad file won't crash the entire process. We will iterate through all records in the event in case the function was triggered for multiple uploads. ```python # imports, clients... def process_file(record) -> list[str]: obj = s3.get_object( Bucket=record['s3']['bucket']['name'], Key=record['s3']['object']['key'] ) tasks = parse_task_list(obj['Body'].read().decode('utf-8')) ids = store_update_tasks(tasks) print(f"Tasks to update: {', '.join(ids)}") return ids def lambda_handler(event, context): ids = [] for record in event['Records']: if record['eventName'].startswith('ObjectCreated'): try: _ids = process_file(record) ids.extend(_ids) except Exception as e: print(f"Error processing record: {e}") # Here we will start Step Function if ids is not empty so if there are any # tasks to be updated in Habitica if ids: print(f"Dummy - Starting step function for update {', '.join(ids)}") return { 'statusCode': 200, 'body': ids } ``` Connecting events ----------------- *This section contains a lot trials and errors on how to connect S3 bucket, S3 event notifications and Lambda. If you want a working solution, navigate to* [*GitHub repository*](https://github.com/ppabis/habitica-item-seller/tree/v3). Now comes the **tricky** part. We used Serverless Application Model based on CloudFormation to create previous Habitica related projects. So the natural thing to do would be to: create a bucket, create a DynamoDB table, connect it to Lambda `Event` property. Let's try to do it. ```yaml AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: A function that processes task list uploaded to S3 and submits it to Habitica Globals: Function: Timeout: 15 MemorySize: 128 Resources: # ... resources from previous posts HabiticaTaskListBucket: Type: AWS::S3::Bucket Properties: BucketName: habitica-task-list-abcdef123456 HabiticaTaskListTable: Type: AWS::Serverless::SimpleTable Properties: TableName: HabiticaTaskList HabiticaProcessTaskList: Type: AWS::Serverless::Function Properties: CodeUri: process_task_list/ Handler: main.lambda_handler Runtime: python3.12 Architectures: - arm64 Policies: - S3ReadPolicy: BucketName: !Ref HabiticaTaskListBucket - DynamoDBCrudPolicy: TableName: !Ref HabiticaTaskListTable - AWSSecretsManagerGetSecretValuePolicy: SecretArn: !Ref HabiticaSecret Environment: Variables: TABLE_NAME: !Ref HabiticaTaskListTable HABITICA_SECRET: !Ref HabiticaSecret Events: S3Event: Type: S3 Properties: Bucket: !Ref HabiticaTaskListBucket Events: s3:ObjectCreated:* ``` I will not filter the event, just let it process anything that lands in the bucket as why not. After we run `sam build` and `sam deploy` we will get an error. ``` Status: FAILED. Reason: Circular dependency between resources: [HabiticaTaskListBucket, HabiticaProcessTaskListRole, HabiticaProcessTaskListS3EventPermission, HabiticaProcessTaskList] ``` Ok, so maybe we should create a bucket in a separate stack and then reference its name in this stack. Let's try doing that. I will cut the `HabiticaTaskListBucket` resource from this stack, deploy it form a different YAML file, export the bucket name as the output and import it into this SAM template. Create a new `template.yml` in a new directory. ```yaml --- AWSTemplateFormatVersion: '2010-09-09' Transform: AWS::Serverless-2016-10-31 Description: A bucket for Habitica task lists - bucket.yaml Resources: HabiticaTaskListBucket: Type: AWS::S3::Bucket Properties: BucketName: habitica-task-list-abcdef123456 Outputs: BucketName: Value: !Ref HabiticaTaskListBucket Export: Name: HabiticaTaskListBucket ``` Now instead of `Bucket: !Ref HabiticaTaskListBucket` we will use `Bucket: !ImportValue HabiticaTaskListBucket`. This will take the bucket name from global CloudFormation outputs and insert it into this stack. ```yaml # ... Resources: # This is now in a separate stack #HabiticaTaskListBucket: # Type: AWS::S3::Bucket # Properties: # BucketName: habitica-task-list-abcdef123456 HabiticaProcessTaskList: Type: AWS::Serverless::Function Properties: CodeUri: process_task_list/ # ... cut for clarity Policies: - S3ReadPolicy: BucketName: !ImportValue HabiticaTaskListBucket # ... cut for clarity Events: S3Event: Type: S3 Properties: Bucket: !ImportValue HabiticaTaskListBucket Events: s3:ObjectCreated:* ``` ```bash $ cd tasks_bucket $ sam build $ sam deploy --guided $ cd .. $ sam build $ sam deploy ... Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [HabiticaProcessTaskList] is invalid. Event with id [S3Event] is invalid. S3 events must reference an S3 bucket in the same template. ``` This seems absurd. So, one possibility is to define the bucket first, deploy and then connect it. But this contradicts the whole infrastructure as code concept as it introduces manual steps. Our only hope is to use native CloudFormation `AWS::Lambda::Permission` to give the S3 events source permission to invoke the function and `NotificationConfiguration` in S3 resource. Let's try that. Remember to destroy the previously created S3 bucket if you did try the method with importing. In the stack below, Lambda will be created first, then the permission and as the last step the bucket will be created. It has to happen in this order because the S3 notification requires that the permission is already in place. However, there's still one loop in here as well! In the read policy for the Lambda we refer the bucket. However, unlike in case of `Event` section we can use a simple string parameter and `!Ref` in both `BucketName` properties. Essentially, we have to replace all references to the bucket with a string or manually created ARN - which is not ideal but better than double deployment. ```yaml Parameters: HabiticaTaskListBucketName: Type: String Description: Name of the S3 bucket that will hold the task list Default: habitica-task-list-abcdef123456 # ... Resources: HabiticaTaskListBucket: Type: AWS::S3::Bucket Properties: BucketName: !Ref HabiticaTaskListBucketName NotificationConfiguration: LambdaConfigurations: - Event: s3:ObjectCreated:* Function: !GetAtt HabiticaProcessTaskList.Arn HabiticaProcessTaskList: Type: AWS::Serverless::Function Properties: CodeUri: process_task_list/ # ... cut for clarity Policies: - S3ReadPolicy: - removed BucketName: !Ref HabiticaTaskListBucketName # ... cut for clarity # Events: - removed HabiticaProcessTaskListS3Permission: Type: AWS::Lambda::Permission Properties: Action: lambda:InvokeFunction FunctionName: !GetAtt HabiticaProcessTaskList.Arn Principal: s3.amazonaws.com SourceArn: !Sub "arn:aws:s3:::${HabiticaTaskListBucketName}" ``` ```bash $ cd tasks_bucket $ sam delete $ cd .. $ sam build && sam deploy ``` Testing the task list processor ------------------------------- I will upload a file to the S3 bucket and expect DynamoDB table to contain all the valid tasks. Next I will replace the file with some updates to task and I would like to see the table updated accordingly. For now, we will not `POST` or `PUT` anything into Habitica. We will only track what is happening in the logs and in DynamoDB Table. ``` $ aws s3 cp tasks.txt s3://habitica-task-list-abcdef123456/tasks.txt $ # ... checking logs, table $ aws s3 cp tasks2.txt s3://habitica-task-list-abcdef123456/tasks.txt $ # ... checking logs and table again ``` The file that I used is the same as in the example above. As the first step, I checked CloudWatch logs if there's any output from the function. I can see that everything went smoothly in terms of processing the file. ``` Error processing line '0010. 66/12/3033 EC - this task should be just warned and not crash but is wrong': time data '66/12/3033' does not match format '%d/%m/%Y' Tasks found in file = 6 Tasks to update: ``` Now I will scan the DynamoDB table items. I will use AWS Console for that. The screenshot below shows the status of both the first version of the file and some updates that were performed after reuploading with some changes. After second run the logs also show expected results - two tasks were existing previously and have to be updated in the next routine. One task was new and was just silently inserted and mock-created in Habitica. ``` Tasks found in file = 7 Task 0001 is up to date. Task 0002 is up to date. Task 0006 is up to date. Task 0234 is up to date. Tasks to update: 0003, 0007 Starting Step Function with 2 tasks. ``` ![DynamoDB tasks](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2pu92z68sz6btfgpo4qk.jpg) Preparing a tag in Habitica --------------------------- Before we create a function that will store every task in Habitica, I suggest creating a tag that will mark each of the tasks that were created automatically. Go to your Habitica dashboard and select `Tags` at the top of the lists. On mobile it looks different. Select `Edit tags` and add a new one that will be used for the purpose. ![Habitica tags](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22nyleugofogi9tc31aq.jpg) Now we need to retrieve the tag's ID. We can automate it in the Lambda on first run but it likely won't change so we can just retrieve it once and hardcode it. To do this, we can use environment variables and `curl`. For each `read` paste appropriate value. It won't echo since it is specified with `-s` flag. The tag you created will likely be the last one in the array. ```bash $ read -s HABITICA_USER $ read -s HABITICA_KEY $ export HABITICA_USER HABITICA_KEY $ curl -s -H "x-api-user: $HABITICA_USER" \ -H "x-api-key: $HABITICA_KEY" \ -H "x-client: $HABITICA_USER-taskscheduler10" \ https://habitica.com/api/v3/tags ... { "id": "a2e84af7-4b0d-46a3-8dcc-3b94b4205e59", "name": "Learning" }, { "id": "4aedf1fc-8dd7-4ff9-95f1-1f3112a0b815", "name": "Automated" } ], "notifications": [], "userV": 21793, "appVersion": "5.26.1" } ``` Edit your `HabiticaProcessTaskList` function's environment variables and add `TASK_TAG` with the value you received from above commands. Function for creating tasks in Habitica -------------------------------------- I will get the tag ID from an environment variable. The new function will take the list of new tasks, format it appropriately and batch send it to Habitica API. Afterwards it will return the list of tuples - our task ID and UUID in Habitica that we will need to save back to DynamoDB. This function will be called only once per file update in S3. I will copy `auth.py` from the [previous project](https://dev.to/aws-builders/use-aws-serverless-to-sell-items-in-habitica-91) and use it to create appropriate headers. ```python import requests, os from auth import get_headers HABITICA_URL="https://habitica.com/api/v3" TASK_TAG = os.getenv("TASK_TAG", "") HEADERS = get_headers() def batch_create_tasks(tasks) -> list[(str, str)]: habitica_tasks = [ create_task(task, TASK_TAG) for task in tasks ] original_ids = [task['id'] for task in tasks] url = f"{HABITICA_URL}/tasks/user" response = requests.post(url, json=habitica_tasks, headers=HEADERS) code = response.status_code if code == 200 or code == 201: data = response.json()['data'] uuids = [ t['id'] for t in data ] if isinstance(data, list) else [data['id']] return list(zip(original_ids, uuids)) raise Exception(response.json()['message']) def create_task(task: dict, tag: str = "") -> dict: # A helper function to format the task as needed data = { "text": task['title'], "type": "todo", "priority": task['difficulty'], "attribute": task['attribute'] } if 'date' in task and task['date']: data['date'] = task['date'].isoformat() if tag: data['tags'] = [tag] return data ``` I built the SAM template and deployed the updated function again. I checked on Habitica's side if the new tasks were created and was pleasantly surprised. I also checked the DynamoDB table and new UUIDs were in place. Now it's time to implement the last part - updating the tasks in Habitica after they were updated in DynamoDB. ![Tasks in Habitica](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jp67vteqxjmqlvjup8qc.jpg) Lambda for updating ------------------- Before we create the Step Function that will be triggered by upstream Lambda that parses the new task list, we will create the downstream Lambda that will be called by this Step Functions. It will read the event that contains the list of task IDs to be updated and will return the same list with the processed item removed. For simplicity the event will also contain `Finished` - a boolean that will determine if a loop in the Step Function should break or not. First, we will retrieve the task from DynamoDB and format it so that it fits Habitica's API. We will throw exceptions in case of problems but it won't crash our process. It's just for logging. ```python import boto3, os TABLE_NAME = os.getenv('TABLE_NAME') ddb = boto3.client('dynamodb').Table(TABLE_NAME) def get_formatted_task(task_id: str) -> tuple[str, dict]: row = ddb.get_item(Key={'id': task_id}) if 'Item' in row: if 'uuid' not in row['Item'] or not row['Item']['uuid']: raise Exception(f"Task {task_id} does not have a UUID!") task = row['Item'] return task['uuid'], format_task(task) raise Exception(f"Task {task_id} not found!") def format_task(task: dict) -> dict: formatted = { "text": task['title'], "priority": task['difficulty'], "attribute": task['attribute'], "date": None # Can be used for clearing the due date } if 'date' in task and task['date']: formatted['date'] = task['date'] return formatted ``` Now we can create a new function that will just send a request to Habitica to update the task. This one is just a very simple PUT request. `auth.py` is also needed to be copied into this Lambda's directory. ```python import requests HABITICA_URL="https://habitica.com/api/v3" def update_task(headers: dict, uuid: str, data: dict): url = f"{HABITICA_URL}/tasks/{uuid}" response = requests.put(url, json=data, headers=headers) code = response.status_code if code == 200: return response.json() raise Exception(response.json()['message']) ``` As the last step, we glue together the functions and manage the list of tasks received from Step Function. ```python def lambda_handler(event, context): tasks = event.get('List', []) if not tasks: event['Finished'] = True return event try: task = event['List'][0] uuid, task = get_formatted_task(task) update_task(HEADERS, uuid, task) except Exception as e: print(e) # We will continue processing and just log problems event['List'] = event['List'][1:] # We will also control Step Function's loop from here event['Finished'] = len(event['List']) == 0 return event ``` Step Function for updating and delaying each call ------------------------------------------------ Now it's time to define a Step Function that will be triggered by the first Lambda and will execute the second Lambda in a loop with a delay. We can define it in AWS SAM as `AWS::Serverless::StateMachine` resource. The definition will look like the code below (we can use YAML for defining the states). Our state machine will also need permissions to execute Lambda. As a loop we will use `Choice` block that will check if input variable `Finished` is set to `false` or otherwise go to `Succeed` state. The Lambda will output transformed variables that will be given back to `Loop Tasks`. Wait block simply passes all the values as they are. ```yaml HabiticaUpdateTasksStateMachine: Type: AWS::Serverless::StateMachine Properties: Policies: - LambdaInvokePolicy: FunctionName: !Ref HabiticaUpdateTasksLambda Definition: StartAt: Loop Tasks States: # Loop for all tasks Loop Tasks: Type: Choice Default: Succeed Choices: - Variable: "$.Finished" BooleanEquals: false Next: Update Task # Loop start Update Task: Type: Task Resource: !GetAtt HabiticaUpdateTasksLambda.Arn Parameters: List.$: "$.List" Finished: false Next: Wait Wait: Type: Wait Seconds: 2 Next: Loop Tasks # Loop end Succeed: Type: Succeed ``` ![Step Function diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xj8nhouwtpgoqf6lx4ma.jpg) The diagram above shows how the Step Function look like after deploying it with SAM. As you see, we still have to define a Lambda function for updating tasks. This is a simple setup like for previous function. It will just reference secret and DynamoDB table. ```yaml HabiticaUpdateTasksLambda: Type: AWS::Serverless::Function Properties: CodeUri: update_tasks/ Handler: main.lambda_handler Runtime: python3.12 Architectures: - arm64 Policies: - DynamoDBCrudPolicy: TableName: !Ref HabiticaTaskListTable - AWSSecretsManagerGetSecretValuePolicy: SecretArn: !Ref HabiticaSecret Environment: Variables: HABITICA_SECRET: !Ref HabiticaSecret TABLE_NAME: !Ref HabiticaTaskListTable ``` Now we will edit the first Lambda function to allow it to trigger the Step Function and pass the list of tasks to update. Insert the following policy and environment variable inside. Also add a line at the end of `lambda_handler` to the `main.py` of that first function. ```yaml HabiticaProcessTaskList: Type: AWS::Serverless::Function Properties: CodeUri: process_task_list/ # ... cut for clarity Policies: # ... cut for clarity - StepFunctionsExecutionPolicy: StateMachineName: !GetAtt HabiticaUpdateTasksStateMachine.Name Environment: Variables: TABLE_NAME: !Ref HabiticaTaskListTable HABITICA_SECRET: !Ref HabiticaSecret TASK_TAG: 01234567-89ab-cdef-0123-456789abcdef STEP_FUNCTION_NAME: !Ref HabiticaUpdateTasksStateMachine ``` ```python # ... imports step = client('stepfunctions') STEP_FUNCTION_NAME = os.getenv('STEP_FUNCTION_NAME') def lambda_handler(event, context): ids = [] # ... code follows # If the list is empty, we don't have to even execute the Step Function if len(ids) > 0 and STEP_FUNCTION_NAME: print(f"Starting Step Function {STEP_FUNCTION_NAME} with {len(ids)} tasks.") step.start_execution(stateMachineArn=STEP_FUNCTION_NAME, input=json.dumps({"List": ids, "Finished": False})) return { 'statusCode': 200, 'body': { 'List': ids, 'Finished': len(ids) == 0 } } ``` Final test ---------- I deleted all the tasks in Habitica and DynamoDB. I uploaded the first list of tasks and an update to it. The tasks were created as expected and updated in Habitica as well! ![Tasks updated](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/znf7s5of5xpnvyebh6aa.jpg) I also checked how the Step Function behaves. It correctly looped twice for two updated tasks. Step Functions allow to look into the execution for each step, check the inputs and outputs during the process. It's very useful for debugging. ![Whole execution](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eo8fgqyyd5yb7m4tw1ei.jpg) ![Input and output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xk6250c16r52fjynuy7o.jpg) As mentioned before, this solution does not support deletion of tasks. Another idea is to connect a Git repository with CodePipeline that will update the file in S3. For a developer, such a solution would be even more usable than uploading files to S3 even using AWS CLI. That way you would also be able to track history of changes.
ppabis
1,922,412
Caddy: Manual Maintenance Mode
Coming from NGINX and others the concept of a maintenance mode that can be manually enabled is...
0
2024-07-13T15:57:10
https://blog.marco.ninja/notes/technology/caddy/caddy-manual-maintenance-mode/
webdev, devops
Coming from NGINX and others the concept of a maintenance mode that can be manually enabled is something I have used many times before. With Caddy it is equally as easy, just using a less obvious syntax. It always follows this general pattern: - Check if a `maintenance.on` file exists at a specified location - If it does, deliver the maintenance page - Otherwise, do the normal thing you do **Notice**: Doing maintenance pages the way I describe in this post means your webserver is checking a file on disk for every request. In small deployments this is likely not adding any overhead you need to be concerned with, but keep it in mind. ## nginx To refresh your memory, this patterns looks like this in nginx. ```nginx server { location / { set $maintenance 0; if (-f /app/maintenance.on) { set $maintenance 1; } if ($maintenance = 1) { rewrite ^(.*)$ /app/static/maintenance.html last; return 503; } proxy_pass http://127.0.0.1:8080; } } ``` ## Caddyfile With Caddy it works exactly the same, just using a different vocabulary. ``` example.marco.ninja { @maintenanceModeActive file /app/maintenance.on handle @maintenanceModeActive { try_files /app/static/maintenance.html file_server { status 503 } } reverse_proxy 127.0.0.1:8080 } ```
professorlogout
1,922,399
How to use apply guardrail to protect PII information with AWS Bedrock Converse API, Lambda and Python! - Anthropic Haiku
In this article, I am going to demonstrate a revision of previously published workshop on how to...
0
2024-07-13T17:12:06
https://dev.to/bhatiagirish/how-to-use-apply-guardrail-to-protect-pii-information-with-aws-bedrock-converse-api-lambda-and-python-anthropic-haiku-3480
aws, amazonbedrock, bedrockconverseapi, generativeai
In this article, I am going to demonstrate a revision of previously published workshop on how to build a serverless GenAI solution to create call center transcript summary via a Rest API, Lambda and AWS Bedrock Converse API and protect the sensitive information like PII using guardrail policy. On July 10, 2024, during the AWS New York Summit, AWS announced the introduction of the **Apply Guardrail** feature for its Generative AI services. Amazon Bedrock, already a standout service in the AWS lineup, gains further flexibility with this new feature, allowing developers to decouple the guardrail from Large Language Model (LLM) invocation using Bedrock. In June 2024, AWS added support for guardrails via the Converse API. However, a challenge I encountered was that guardrail policies were being applied to both input and response. With additional code for guard content, I was able to accomplish the desired result however it was additional lines of code to meet the use case requirements. Here's a [link ](https://dev.to/bhatiagirish/how-to-use-guardrail-to-protect-pii-information-with-aws-bedrock-converse-api-lambda-and-python-anthropic-haiku-1323)to my previous article on how to apply guardrails using the Converse API to protect Personally Identifiable Information (PII). The newly announced Apply Guardrail function empowers developers with more control over how best to implement guardrails when invoking the Bedrock API. This feature ensures that based on the guardrail policy, if an input is blocked, developers can return a response before Bedrock is even invoked. This approach not only enhances security but also optimizes efficiency by saving unnecessary calls to the foundational model. This enhancement marks a significant step forward in the customization and control developers have over their AI applications, ensuring safer and more efficient interactions with generative AI models. **In this article, I have updated my previous published article and code to demonstrate how the Apply Guardrail feature can be used to protect PII information from a transcript summary generated using Amazon Bedrock, Lambda, and API.** Examples of PII (Personal Identifiable Information) - SSN, Account Number, Phone, Email, Address etc. **Let's revisit the Guardrail Policies supported by AWS** **Guardrail Policies** The Amazon Bedrock Guardrail feature allows you to configure various filters, providing responsible boundaries for the responses generated by your AI solution. These guardrails help ensure that the outputs are appropriate and align with your requirements and standards. **Content Filters** Content Filters across 6 categories - Hate - Insults - Sexual - Violence - Misconduct - Prompt Attack Filters can be set to None, Low, Medium, High. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hrnz020t0gmd93pxnnmi.png) **Denied Topics** You can specify filter for the topic that API should not respond to! **Word Filter** You can specify words that you want filter to act on before providing a response! **Sensitive Information Filter** Filter to either block or mask the Personal Identifiable Information. Amazon Bedrock allow provides a way to configure the message provided back to the user if input or the response in violation with the guardrail configured policies. For example, if Sensitive information filter configured to block a request with account number, then, you can provide a customize response letting user know that request cannot be processed since it contains a forbidden data element. Let's review our use cases: • There is a transcript available for a case resolution and conversation between customer and support/call center team member. • A call summary needs to be created based on this resolution/conversation transcript. • An automated solution is required to create call summary. • An automated solution will provide a repeatable way to create these call summary notes. • Increase in productivity as team members usually work on documenting these notes can focus on other tasks. • Guardrail should be configured so that PII information is not displayed in the response. • Guardrail will also be applied to input. If input contains blocked data element like account number, then, API will not invoke the LLM and return the initial response to the consumer. I am generating my lambda function using AWS SAM, however similar can be created using AWS Console. I like to use AWS SAM wherever possible as it provides me the flexibility to test the function without first deploying to AWS cloud. Here is the **architecture** diagram for our use case. ![Image architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/egtyunz5birz1rk4jynp.png) **Create a SAM Template** I will create a SAM template for the lambda function that will contain the code to invoke Bedrock Converse API along with required parameters and a prompt. Lambda function can be created without the SAM template however, I prefer to use Infra as Code approach since that allow for easy recreation of cloud resources. Here is the SAM template for the lambda function. ![Image SAM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6wd3hs07da6q0cy5ghop.png) **Create a Lambda Function** The Lambda function serves as the core of this automated solution. It contains the code necessary to fulfill the business requirement of creating a summary of the call center transcript using the Amazon Bedrock Converse API. This Lambda function accepts a prompt, which is then forwarded to the Bedrock Converse API to generate a response using the Anthropic Haiku foundation model. Now, Let's look at the code behind it. **Example of apply guardrail in the function:** ![Image GD1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ffnkr39lq03wv272292.png) ![Image GD2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/koe9yonebxft1soaev9q.png) ![Image Lambda](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nut39u7hc5koivqjsx8k.png) **Build function locally using AWS SAM** Next build and validate function using AWS SAM before deploying the lambda function in AWS cloud. Few SAM commands used are: • SAM Build • SAM local invoke • SAM deploy **Bedrock Invoke Model Vs. Bedrock Converse API** **Bedrock InvokeModel** ![Image Bedrock](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1wfd3rb6cij6al3x322c.png) **Bedrock Converse API** ![Image Converse](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jlbyodetj98zbo6s9po0.png) **Validate the GenAI Model response using a prompt** Prompt engineering is an essential component of any Generative AI solution. It is both art and science, as crafting an effective prompt is crucial for obtaining the desired response from the foundation model. Often, it requires multiple attempts and adjustments to the prompt to achieve the desired outcome from the Generative AI model. Given that I'm deploying the solution to AWS API Gateway, I'll have an API endpoint post-deployment. I plan to utilize Postman for passing the prompt in the request and reviewing the response. Additionally, I can opt to post the response to an AWS S3 bucket for later review. ![Image Postman](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tar2xo8vd1evpybr2lw3.png) I am using Postman to pass transcript file for the prompt. This transcript file has a conversation between call center employee (John) and customer (Girish) about a request to reset the password due to the locked account. • John: Hello, thank you for calling technical support. My name is John and I will be your technical support representative. Can I have your account number, please? • Girish: Yes, my account number is 21X-45X-8790. • John: Thank you. I see that you have locked your account due to multiple failed attempts to enter your password. To reset your password, I will need to ask you a few security questions. Can you please provide me with the answers to your security questions? • Girish: Sure, my security questions are: What is your favorite color? and What is your favorite food? • John: Please can you provide your zip code? • Girish: Yes, my zip code is 43215. • John: one final question, Please confirm your email address. • Girish: my email is gbtest@gmailtest.com. • John: Great, thank you. I will now reset your password and send you an email with instructions on how to log in to your account. Please check your email in a few minutes. • Girish: Thank you so much for your help. • John: You're welcome. Is there anything else I can assist you with today? • Girish: No, that's all for now. Thank you again for your help. • John: You're welcome. Have a great day! **Review the guarded/masked response returned by Generative AI Foundation Model ** ![Image Response1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymasemyjfy4boo3oergx.png) As you can note in the response above, GenAI response has masked the PII information. Let's look at the response once guardrail policy is updated to block the PII data. **Response with blocked data** Here is the response when policy is updated to block if PII contains account number. ![Image Response2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8qbx9ym3a8zwdrvi820p.png) **Input with blocked data** ![Image Response3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0wd5c2wm12qr6atdxkff.png) With these steps, a serverless GenAI solution to create call center transcript summary via a Rest API, Lambda and AWS Bedrock Converse API has been successfully completed. Amazon Bedrock Guardrail has been configured to protect PII information. Apply guardrail was applied to demonstrate how both input and response data can be protected using the guardrail. Python/Boto3 were used to invoke the Bedrock Converse API with Anthropic Haiku. As was demonstrated, with Converse API, guardrail was used to implement a policy to control the GenAI response and abstract and block the PII data! A guardrail was created to remove the PII information from the response. Also, guardrail config was updated to validate that account number when configured for blocking, will be blocked. Thanks for reading! Click here to get to YouTube video for this solution. {% embed https://www.youtube.com/watch?v=UNEtKudYvA4 %} https://www.youtube.com/watch?v=UNEtKudYvA4 𝒢𝒾𝓇𝒾𝓈𝒽 ℬ𝒽𝒶𝓉𝒾𝒶 𝘈𝘞𝘚 𝘊𝘦𝘳𝘵𝘪𝘧𝘪𝘦𝘥 𝘚𝘰𝘭𝘶𝘵𝘪𝘰𝘯 𝘈𝘳𝘤𝘩𝘪𝘵𝘦𝘤𝘵 & 𝘋𝘦𝘷𝘦𝘭𝘰𝘱𝘦𝘳 𝘈𝘴𝘴𝘰𝘤𝘪𝘢𝘵𝘦 𝘊𝘭𝘰𝘶𝘥 𝘛𝘦𝘤𝘩𝘯𝘰𝘭𝘰𝘨𝘺 𝘌𝘯𝘵𝘩𝘶𝘴𝘪𝘢𝘴𝘵
bhatiagirish
1,922,400
Understanding and Fixing "Object Reference Not Set to an Instance of an Object" in C#
Introduction One of the most common and frustrating errors that C# developers encounter is...
28,056
2024-07-13T15:57:16
https://dev.to/iamcymentho/understanding-and-fixing-object-reference-not-set-to-an-instance-of-an-object-in-c-2794
csharp, c, softwareengineering, help
## Introduction One of the most common and frustrating errors that C# developers encounter is the infamous `Object reference not set to an instance of an object`. This error message can be confusing, especially for those new to programming. In this article, we will demystify this error, explain its causes, provide a memorable real-life analogy, and offer solutions to prevent and fix it. What Does `"Object Reference Not Set to an Instance of an Object"` Mean? In layman's terms, this error occurs when you try to use an object that hasn't been created yet. It's like trying to drive a car that hasn't been built—you can't use something that doesn't exist. In technical terms, this error is a `NullReferenceException`. It happens when you attempt to access a member (method, property, field) of an object that is null. A null object means that the object reference points to nothing or no instance of the object exists in memory. **Real-Life Analogy** Imagine you're at home, and you want to make a phone call. You reach for your phone, but it's not there because you never bought one. In this scenario: - The phone is the object. - Reaching for the phone is like trying to access a member of the object. - Not having the phone is like the object reference being null. So, when you try to make a call, you can't, because the phone (`object`) doesn't exist. Similarly, in your code, trying to use an object that hasn't been instantiated results in the Object reference not set to an instance of an object error. **Common Scenarios and Fixes:** **- Uninitialized Objects** ```csharp class Person { public string Name { get; set; } } Person person = null; Console.WriteLine(person.Name); // Throws NullReferenceException ``` **Fix: Initialize the Object** ```csharp Person person = new Person(); person.Name = "John"; Console.WriteLine(person.Name); // No error ``` **- Uninitialized Members in a Class** ```csharp class Car { public Engine Engine { get; set; } } class Engine { public int Horsepower { get; set; } } Car car = new Car(); Console.WriteLine(car.Engine.Horsepower); // Throws NullReferenceException ``` **Fix: Initialize the Member** ```csharp Car car = new Car { Engine = new Engine() }; car.Engine.Horsepower = 150; Console.WriteLine(car.Engine.Horsepower); // No error ``` **- Null Return from Methods** ```csharp class Repository { public Person GetPersonById(int id) { // Returns null if person not found return null; } } Repository repo = new Repository(); Person person = repo.GetPersonById(1); Console.WriteLine(person.Name); // Throws NullReferenceException ``` **Fix: Check for Null** ```csharp Person person = repo.GetPersonById(1); if (person != null) { Console.WriteLine(person.Name); // No error if person is not null } else { Console.WriteLine("Person not found"); } ``` **- Incorrect Assumptions about Collections** ```csharp List<Person> people = null; Console.WriteLine(people.Count); // Throws NullReferenceException ``` **Fix: Initialize Collections** ```csharp List<Person> people = new List<Person>(); Console.WriteLine(people.Count); // No error ``` **Best Practices to Avoid NullReferenceExceptions** 1. Use Null-Conditional Operators The null-conditional operator (?.) can help safely access members of an object that might be null. ```csharp Person person = null; Console.WriteLine(person?.Name); // No error, outputs nothing ``` 2. Initialize Variables and Members Always initialize your variables and class members to avoid null references. ```csharp class Car { public Engine Engine { get; set; } = new Engine(); } ``` 3. Perform Null Checks Always check for null before accessing members of an object. ```csharp if (person != null) { Console.WriteLine(person.Name); } ``` 4. Use Safe Navigation with LINQ When using LINQ, ensure that collections are not null before performing queries. ```csharp var names = people?.Select(p => p.Name).ToList(); ``` **Conclusion** The Object reference not set to an instance of an object error is a common stumbling block for C# developers, but understanding its cause and knowing how to prevent and fix it can save you a lot of headaches. Always remember to initialize your objects and perform null checks where necessary. With these best practices in mind, you'll be well-equipped to handle and avoid this error in your future projects. `LinkedIn Account` : [LinkedIn](https://www.linkedin.com/in/matthew-odumosu/) `Twitter Account `: [Twitter](https://twitter.com/iamcymentho) **Credit**: Graphics sourced from [LoginRadius](https://www.loginradius.com/blog/engineering/exception-handling-in-csharp/)
iamcymentho
1,922,402
Hi
I'm here to document my journey from a noob to a FullStack Web Developer. I believe that this is the...
0
2024-07-13T15:41:57
https://dev.to/gregharis/hi-o25
I'm here to document my journey from a noob to a FullStack Web Developer. I believe that this is the beginning of an amazing journey. So let the journey begin
gregharis
1,922,403
Singleton and Unmodifiable Collections and Maps
You can create singleton sets, lists, and maps and unmodifiable sets, lists, and maps using the...
0
2024-07-13T15:42:07
https://dev.to/paulike/singleton-and-unmodifiable-collections-and-maps-25dh
java, programming, learning, beginners
You can create singleton sets, lists, and maps and unmodifiable sets, lists, and maps using the static methods in the **Collections** class. The **Collections** class contains the static methods for lists and collections. It also contains the methods for creating immutable singleton sets, lists, and maps, and for creating read-only sets, lists, and maps, as shown in Figure below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b6nrp0n2g8moerbhlk9s.png) The **Collections** class defines three constants—**EMPTY_SET**, **EMPTY_LIST**, and **EMPTY_MAP**—for an empty set, an empty list, and an empty map. These collections are immutable. The class also provides the **singleton(Object o)** method for creating an immutable set containing only a single item, the **singletonList(Object o)** method for creating an immutable list containing only a single item, and the **singletonMap(Object key, Object value)** method for creating an immutable map containing only a single entry. The **Collections** class also provides six static methods for returning _read-only views for collections_: **unmodifiableCollection(Collection c)**, **unmodifiableList(List list)**, **unmodifiableMap(Map m)**, **unmodifiableSet(Set set)**, **unmodifiableSortedMap(SortedMap m)**, and **unmodifiableSortedSet(Sorted Set s)**. This type of view is like a reference to the actual collection. But you cannot modify the collection through a read-only view. Attempting to modify a collection through a read-only view will cause an **UnsupportedOperationException**.
paulike
1,922,556
Complete Review of Undress AI Tool's Website and Tools
In the realm of digital content creation, having access to powerful image editing tools can...
0
2024-07-13T18:05:29
https://dev.to/gogato2980/complete-review-of-undress-ai-tools-website-and-tools-171d
In the realm of digital content creation, having access to powerful image editing tools can significantly elevate the quality and impact of your visuals. The [Undress AI Tool](https://www.undressaitool.com/) emerges as a standout solution in this regard, harnessing advanced artificial intelligence to streamline and enhance the editing process. Whether you're a professional photographer, a marketer, or a content creator, understanding how this tool can transform your creative workflow is essential. Overview of Undress AI Tool's Capabilities The Undress AI Tool is designed to simplify complex image editing tasks, particularly focusing on background removal and enhancing overall image quality. Here's a breakdown of its key capabilities: Advanced Background Removal: One of the standout features of the Undress AI Tool is its ability to accurately and efficiently remove backgrounds from images. Using AI-powered algorithms, it can precisely isolate subjects from their backgrounds, making it ideal for creating clean, professional-looking portraits, product images, and marketing. visuals.
gogato2980
1,922,404
The Evolution of Speech Recognition: A Sista AI Perspective
Discover the transformative power of voicebots with Sista AI. Join the AI revolution today! 🚀
0
2024-07-13T15:45:35
https://dev.to/sista-ai/the-evolution-of-speech-recognition-a-sista-ai-perspective-1jkd
ai, react, javascript, typescript
<h2>Unveiling the Future of Speech Recognition with Sista AI</h2><p>Speech recognition technology has rapidly advanced in recent years, driven by AI and machine learning breakthroughs. This progress has reshaped how users interact with technology, making it more intuitive and accessible. Sista AI is at the forefront of this revolution, offering an end-to-end AI integration platform that transforms any app into a smart app with a voice assistant in less than 10 minutes.</p><p>The voice assistant can be seamlessly integrated into any app or website, providing a range of innovative features designed to enhance user engagement and accessibility. It leverages state-of-the-art conversational AI agents capable of delivering precise responses and understanding complex queries, providing a human-like interaction experience. The voice user interface supports commands in over 40 languages, ensuring a dynamic and engaging user experience for a global audience.</p><h2>Challenges and Promises in Speech Recognition</h2><p>However, the increasing capabilities of speech recognition also raise concerns about security. AI-generated voices can deceive voice recognition systems used for identity verification, potentially compromising sensitive data. This underscores the need for robust security measures and constant vigilance against evolving AI capabilities.</p><p>Despite these challenges, the future of speech recognition holds immense promise. By 2030, speech recognition is expected to feature truly multilingual models, rich standardized output objects, and widespread availability. This technology will continue to improve, enabling seamless human-machine collaboration and reducing bias. Applications will expand to include uses such as police body cams and live video captions.</p><h2>Empowering Businesses with Sista AI</h2><p>Sista AI is committed to driving progress in speech recognition and AI integration. The platform's advanced AI solutions set new industry standards, offering unprecedented capabilities and seamless integration that revolutionize how businesses and users interact with technology. With features such as conversational AI agents, voice user interfaces, and real-time data integration, Sista AI is poised to transform the way we interact with technology.</p><h2>The Future of Interaction: Sista AI's Vision</h2><p>In conclusion, the power of speech recognition is reshaping the future of technology. With Sista AI at the forefront, we can expect to see significant advancements in user experience, accessibility, and security. As AI capabilities continue to evolve, it is crucial to balance innovation with robust security measures to ensure a safe and seamless interaction experience for all users. Visit <a href='https://smart.sista.ai/?utm_source=sista_blog&utm_medium=blog_post&utm_campaign=The_Evolution_of_Speech_Recognition_A_Sista_AI_Perspective'>Sista AI</a> and start unlocking the potential of AI integration today.</p><br/><br/><a href="https://smart.sista.ai?utm_source=sista_blog_devto&utm_medium=blog_post&utm_campaign=big_logo" target="_blank"><img src="https://vuic-assets.s3.us-west-1.amazonaws.com/sista-make-auto-gen-blog-assets/sista_ai.png" alt="Sista AI Logo"></a><br/><br/><p>For more information, visit <a href="https://smart.sista.ai?utm_source=sista_blog_devto&utm_medium=blog_post&utm_campaign=For_More_Info_Link" target="_blank">sista.ai</a>.</p>
sista-ai
1,922,405
Building the Brains of the Machine: A Guide to Becoming a Computer Hardware Engineer
The whirring fans, blinking lights – the hardware is the very foundation of the digital world we...
0
2024-07-13T15:50:25
https://dev.to/epakconsultant/building-the-brains-of-the-machine-a-guide-to-becoming-a-computer-hardware-engineer-1fpk
hardware
The whirring fans, blinking lights – the hardware is the very foundation of the digital world we navigate. If you're fascinated by the intricate components that power our computers, then a career as a computer hardware engineer might be your perfect fit. This guide unveils the path to becoming a skilled professional in this exciting field. [Raspberry Pi Robotics: Programming with Python and Building Your First Robot](https://www.amazon.com/dp/B0CTG9RGFM) Understanding the Hardware Landscape Computer hardware engineers design, develop, and test the physical components that make up a computer system. This encompasses a vast array of elements: - Processors (CPUs): The brains of the operation, responsible for executing instructions. - Memory (RAM): Holds data for temporary processing. - Storage Devices (HDDs, SSDs): Store data permanently. - Motherboards: The central hub connecting various components. - Graphics Processing Units (GPUs): Handle graphics rendering and intensive computations. - Peripherals: Input/output devices like keyboards, mice, and monitors. [Pocket-Friendly Feasts: 5 Dollar Meals That Satisfy](https://benable.com/sajjaditpanel/e98543c1a254e10c80b2) The Skills Arsenal of a Hardware Engineer To excel in this domain, a strong foundation in several areas is essential: - Hardware Knowledge: In-depth understanding of computer architecture, component functionalities, and their interactions within a system. - Circuit Design: Proficiency in electronic circuits, their design principles, and troubleshooting techniques. - Programming Languages: Familiarity with languages like C, C++, and Python for low-level programming and hardware interaction. - Mathematics and Physics: A solid grasp of mathematical concepts like calculus and physics principles like electricity and magnetism is crucial. - Problem-Solving and Analytical Skills: The ability to identify hardware malfunctions, diagnose root causes, and design effective solutions is paramount. Charting Your Educational Course Several educational paths can lead you to a successful career in computer hardware engineering: - Bachelor's Degree: Earning a Bachelor's degree in Computer Engineering, Electrical Engineering, or a related field equips you with a strong theoretical foundation and practical skills. Look for programs with courses focused on digital logic design, computer architecture, and embedded systems. - Master's Degree: While not mandatory, a Master's degree in a specialized area like computer architecture or VLSI design (Very-Large-Scale Integration) can provide an edge in specific fields and research-oriented roles. [Hardware Engineer](https://app.draftboard.com/apply/pkARuRm) Gaining Practical Experience: Internships and Projects Beyond academics, practical experience is invaluable. Consider these avenues: - Internships: Seek internship opportunities at hardware companies, research labs, or electronics manufacturers. This provides hands-on experience working with real hardware and collaborating with experienced engineers. - Personal Projects: Build your own computer or embark on personal projects involving electronic components. This demonstrates your passion, initiative, and problem-solving skills. Continuous Learning: Embracing the Evolving Landscape The world of technology is constantly evolving. Stay updated with emerging trends in areas like: - Artificial Intelligence (AI): Hardware advancements are crucial for the development of powerful AI systems. - Quantum Computing: This novel technology requires specialized hardware designs. - Internet of Things (IoT): The proliferation of connected devices necessitates efficient and miniaturized hardware solutions. Career Paths for Hardware Engineers With your acquired skills and experience, you can explore various career avenues: - Hardware Design Engineer: Design and develop specific computer components like processors or motherboards. - Embedded Systems Engineer: Create hardware and software for specialized devices within larger systems (e.g., medical equipment, industrial control systems). - Field Service Engineer: Provide technical support and troubleshooting for hardware in customer environments. - Research and Development Engineer: Conduct research and contribute to the development of innovative hardware solutions for the future. Conclusion: Building the Future, One Component at a Time A career as a computer hardware engineer offers a rewarding path for those fascinated by technology and problem-solving. Dedication to education, a thirst for knowledge, and a passion for building the physical foundation of the digital world will equip you to excel in this dynamic and ever-evolving field. Remember, the journey is an integral part of the process. Embrace the challenges, hone your skills, and become a part of the team building the future of computing hardware.
epakconsultant
1,922,406
WordPress Username Enumeration Attacks: Understanding and Prevention
Introduction WordPress is without a doubt one of the most widely used content management systems...
0
2024-07-13T15:55:21
https://www.nilebits.com/blog/2024/07/wordpress-username-enumeration-attacks/
wordpress, cybersecurity, webdev, programming
Introduction WordPress is without a doubt one of the most widely used content management systems (CMS) in the world, powering over 40% of all websites on the internet. But because of its broad use, it is also a perfect target for bad actors looking to take advantage of security holes for all kinds of evil intents. Among these issues, username enumeration is one that WordPress site owners should be particularly aware of and take proactive measures to prevent. In this comprehensive guide, we delve deep into the realm of WordPress username enumeration attacks—what they are, how they work, and most importantly, how you can effectively prevent them. Whether you’re a seasoned WordPress developer, a site administrator, or a novice exploring the world of website security, this article aims to equip you with the knowledge and tools necessary to safeguard your WordPress site from this pervasive threat. Understanding Username Enumeration Understanding username enumeration and its effects on WordPress security is essential before delving into preventative measures. In short, username enumeration is the methodical process of doing systematic trial and error to find legitimate usernames connected with a WordPress site. By taking advantage of this vulnerability, attackers can get sensitive data, such legitimate usernames, which they can use to conduct more focused and possibly destructive assaults, like social engineering schemes or brute-force password attempts. In the following sections, we’ll explore how username enumeration works, why WordPress sites are particularly vulnerable to such attacks, and the real-world implications of overlooking this seemingly innocuous security flaw. Why WordPress is Vulnerable WordPress, renowned for its user-friendly interface and extensive plugin ecosystem, also presents several inherent vulnerabilities that make it susceptible to username enumeration attacks. Understanding these vulnerabilities is crucial for effectively mitigating risks and securing your WordPress site. 1. Default Login Mechanism WordPress uses a straightforward login mechanism where users enter their username and password to access the admin dashboard. While this simplicity enhances user experience, it also exposes usernames to potential enumeration. Attackers can exploit this by systematically testing different usernames until they discover valid ones, often through automated scripts. 2. Predictable Usernames Many WordPress sites use predictable usernames, such as ‘admin,’ ‘administrator,’ or variations of the site’s domain name. Attackers capitalize on these common patterns to guess valid usernames more easily. Additionally, WordPress historically defaulted to assigning ‘admin’ as the username during installation, compounding the risk. 3. Information Disclosure WordPress inadvertently leaks information that aids attackers in username enumeration. For example, error messages generated during failed login attempts may reveal whether a username exists on the site. Attackers leverage these subtle cues to build lists of valid usernames, thereby facilitating subsequent attacks. 4. Plugins and Themes The extensive use of plugins and themes in WordPress introduces additional attack vectors. Poorly coded or outdated plugins can inadvertently expose usernames through various mechanisms, including publicly accessible API endpoints or error handling routines that disclose sensitive information. 5. Brute-Force Attacks Once attackers obtain a list of valid usernames through enumeration, they often proceed with brute-force attacks to guess passwords. This iterative process, fueled by automated tools, exploits weak or commonly used passwords to gain unauthorized access to the WordPress admin panel. 6. Social Engineering Beyond technical exploits, username enumeration can fuel social engineering attacks. Armed with a list of valid usernames, attackers may craft convincing phishing emails or targeted messages to trick users into divulging their passwords or other sensitive information. Understanding these vulnerabilities underscores the importance of implementing robust security measures to mitigate the risks posed by username enumeration in WordPress. In the next sections, we’ll explore effective strategies and best practices to safeguard your WordPress site against these pervasive threats. Detecting Username Enumeration Attacks Detecting username enumeration attacks early is crucial for mitigating potential security risks and safeguarding your WordPress site. While preventing such attacks altogether is ideal, having effective detection mechanisms in place can help you respond promptly and mitigate potential damage. In this section, we explore various methods and tools to detect username enumeration attacks on your WordPress site: 1. Monitoring Login Attempts One of the initial indicators of username enumeration attempts is a sudden increase in failed login attempts. Monitoring your site’s login activity through access logs or security plugins allows you to identify patterns indicative of automated scripts systematically testing different usernames. 2. Analyzing Access Logs Regularly analyzing access logs provides valuable insights into login attempts and potential malicious activities. Look for repetitive patterns or spikes in failed login attempts from specific IP addresses or user agents, which may signal ongoing username enumeration efforts. 3. Captcha and Rate Limiting Implementing CAPTCHA challenges and rate-limiting mechanisms can effectively deter automated scripts used in username enumeration attacks. CAPTCHA requires users to solve challenges, while rate limiting restricts the number of login attempts per IP address within a specified time frame, making automated attacks less feasible. 4. Security Plugins Utilizing WordPress security plugins enhances your site’s ability to detect and mitigate username enumeration attacks. Many security plugins offer features such as real-time monitoring, IP blocking for suspicious activities, and detailed reports on login attempts, empowering you to take proactive measures against potential threats. 5. Web Application Firewalls (WAFs) Deploying a web application firewall (WAF) adds an additional layer of defense against username enumeration attacks and other malicious activities. WAFs analyze incoming traffic to block suspicious requests based on predefined rulesets, effectively mitigating known attack vectors targeting WordPress sites. 6. Intrusion Detection Systems (IDS) Intrusion Detection Systems (IDS) can detect and alert administrators to suspicious login patterns or anomalous behavior that may indicate username enumeration attempts. IDS solutions tailored for WordPress can provide real-time notifications and insights into potential security incidents. 7. Custom Monitoring Scripts For advanced users or developers, creating custom monitoring scripts specific to your WordPress environment can enhance detection capabilities. These scripts can analyze login logs, audit trail data, or API requests for unusual patterns that may indicate ongoing username enumeration activities. 8. Collaborative Threat Intelligence Participating in collaborative threat intelligence platforms or communities can provide valuable insights into emerging threats and known attack patterns targeting WordPress sites. Sharing information and best practices with peers can help bolster your site’s defenses against username enumeration and other security threats. By implementing a combination of these detection methods and tools, WordPress site owners can effectively monitor for and respond to username enumeration attacks, reducing the risk of unauthorized access and potential data breaches. In the next section, we’ll explore actionable strategies to prevent username enumeration attacks proactively. Impact of Username Enumeration Attacks Understanding the potential impact of username enumeration attacks is crucial for WordPress site owners to grasp the severity of this security vulnerability. While seemingly benign compared to more overt threats, username enumeration can pave the way for more sophisticated and damaging exploits. In this section, we explore the multifaceted impact of username enumeration attacks on WordPress sites: 1. Increased Vulnerability to Brute-Force Attacks Username enumeration serves as a precursor to brute-force attacks, where attackers use automated tools to systematically guess passwords associated with identified usernames. By confirming valid usernames through enumeration, attackers narrow down their focus and increase the likelihood of successfully compromising accounts with weak or reused passwords. 2. Compromised User Privacy Username enumeration exposes sensitive information about registered users on a WordPress site. Attackers can compile lists of valid usernames, which may include administrators, editors, or contributors, potentially exposing personal information associated with these accounts. This breach of privacy can have legal and reputational repercussions for site owners. 3. Elevation of Privileges Once attackers gain access to a WordPress account through username enumeration and subsequent password guessing, they may exploit vulnerabilities within the site or associated services. Depending on the user role associated with the compromised account, attackers can escalate privileges, gaining unauthorized access to sensitive data, administrative controls, or the ability to distribute malicious content. 4. Reputation Damage and Trust Issues A successful username enumeration attack can damage a WordPress site’s reputation and erode user trust. Compromised accounts may be used to distribute spam, phishing attempts, or malware, tarnishing the site’s credibility and potentially leading to blacklisting by search engines or security vendors. 5. Legal and Compliance Concerns In cases where compromised accounts contain sensitive user data, such as personally identifiable information (PII) or financial details, WordPress site owners may face legal liabilities and compliance obligations. Data protection regulations, such as GDPR or CCPA, impose stringent requirements on handling and safeguarding user information, necessitating swift and transparent response measures in the event of a security breach. 6. Operational Disruption and Recovery Costs Remediating the aftermath of a username enumeration attack entails operational disruptions and financial costs for WordPress site owners. Tasks such as restoring compromised accounts, investigating the root cause of the breach, and implementing enhanced security measures can strain resources and disrupt business continuity. 7. Reputational Damage to the WordPress Ecosystem Collectively, widespread username enumeration attacks can undermine trust in the WordPress ecosystem as a whole. Continued exploitation of known vulnerabilities highlights the importance of robust security practices and collaborative efforts to mitigate risks and protect user data across WordPress sites worldwide. Understanding these impacts underscores the imperative for WordPress site owners to prioritize security measures that address username enumeration vulnerabilities. In the subsequent section, we delve into actionable strategies and best practices to prevent username enumeration attacks and fortify WordPress site defenses. Preventing Username Enumeration in WordPress Improving the security posture of your WordPress website requires putting in place strong mechanisms to stop username enumeration. Site owners may greatly lower the risk of unwanted access and potential data breaches by proactively resolving vulnerabilities and putting best practices into place. We examine practical methods and tactics to stop username enumeration attacks in this section: 1. Use Strong and Unique Usernames Encourage users to create strong and unique usernames during registration or account creation. Discourage the use of predictable usernames such as ‘admin’ or variations of the site’s domain name, which are commonly targeted in enumeration attacks. 2. Implement Username Masking Modify WordPress login behavior to mask error messages that reveal whether a username exists on the site. By presenting generic error messages for both valid and invalid usernames during login attempts, you mitigate the ability of attackers to confirm valid usernames through enumeration. 3. Customize Login URLs Employ plugins or custom coding to change the default login URL of your WordPress site. This practice can deter automated scripts used in username enumeration attacks, as attackers must first identify the correct login URL before attempting to test usernames. 4. Implement CAPTCHA Challenges Integrate CAPTCHA challenges into the WordPress login process to verify human users and thwart automated scripts. CAPTCHA requires users to solve challenges, such as identifying distorted text or selecting specific images, before proceeding with login attempts, effectively blocking many automated attacks. 5. Implement Rate Limiting Enforce rate limiting measures to restrict the number of login attempts per IP address or user account within a specified time frame. By limiting the frequency of login attempts, you reduce the effectiveness of brute-force attacks that rely on rapid iteration to guess passwords associated with enumerated usernames. 6. Disable User Enumeration APIs Disable or restrict access to APIs and endpoints that inadvertently disclose user information, such as user enumeration APIs. WordPress plugins or custom coding can be utilized to modify default API behavior and prevent unauthorized access to sensitive user data. 7. Regularly Update WordPress Core, Themes, and Plugins Stay vigilant about updating WordPress core files, themes, and plugins to their latest versions. Updates often include security patches that address known vulnerabilities, including those exploited in username enumeration attacks. Enable automatic updates where possible to streamline this process. 8. Monitor and Analyze Login Attempts Regularly monitor login attempts and access logs for suspicious patterns or unusual activities that may indicate ongoing username enumeration attempts. Utilize WordPress security plugins or server-side logging mechanisms to facilitate real-time detection and response to potential threats. 9. Educate Users on Security Best Practices Educate WordPress users, including administrators, editors, and contributors, on security best practices such as creating strong passwords, enabling two-factor authentication (2FA), and recognizing phishing attempts. Empowering users with knowledge enhances overall site security and mitigates the impact of potential security incidents. 10. Implement Web Application Firewalls (WAFs) Deploy a web application firewall (WAF) to monitor and filter incoming traffic to your WordPress site. WAFs can detect and block malicious requests associated with username enumeration attacks, providing an additional layer of defense against sophisticated threats. By implementing these preventive measures collectively, WordPress site owners can strengthen their defenses against username enumeration attacks and reduce the risk of compromising sensitive user information or site integrity. In the next section, we’ll delve into additional security strategies and best practices to fortify your WordPress site against evolving threats. Hardening WordPress Security Using cutting-edge techniques and industry best practices to strengthen your website against a variety of security risks, such as username enumeration attacks, is known as “hardening WordPress security.” WordPress site owners may reduce risks and protect sensitive data from unauthorized access by taking a proactive approach to security. We examine key tactics and methods for strengthening WordPress security in this section: 1. Secure User Authentication Enhance user authentication processes by enforcing strong password policies and encouraging the use of complex passwords. Consider implementing two-factor authentication (2FA) to add an additional layer of verification beyond username and password, significantly reducing the risk of unauthorized access via enumerated usernames. 2. Disable XML-RPC Endpoint [Disable the XML-RPC](https://www.linkedin.com/pulse/what-xmlrpcphp-wordpress-why-you-should-disable-amr-saafan/) endpoint unless it’s required for specific functionalities. XML-RPC can be exploited in various ways, including brute-force attacks against enumerated usernames. Use plugins or server configurations to disable XML-RPC or limit its functionality to trusted IP addresses. 3. Restrict File Permissions Set strict file permissions on WordPress directories and files to prevent unauthorized access or modification. Limit write permissions to essential directories, such as wp-content/uploads, while ensuring that core WordPress files are only writable by the web server user. 4. Implement HTTPS Encryption Secure data transmission between users and your WordPress site by implementing HTTPS encryption using SSL/TLS certificates. HTTPS encrypts data in transit, protecting sensitive information such as login credentials and user sessions from interception or tampering. 5. Regularly Update and Patch Stay vigilant about updating WordPress core files, themes, and plugins to their latest versions. Updates often include security patches that address vulnerabilities exploited in username enumeration attacks and other threats. Enable automatic updates where possible to ensure timely protection. 6. Use Security Plugins Deploy reputable security plugins designed specifically for WordPress to enhance site protection. Security plugins offer features such as real-time monitoring, malware scanning, IP blocking for suspicious activities, and login attempt limits, bolstering defenses against automated attacks, including username enumeration. 7. Harden Database Security Secure your WordPress database by using unique database prefixes during installation to prevent SQL injection attacks. Regularly backup your database and implement access controls to restrict database privileges based on the principle of least privilege. 8. Implement Web Application Firewall (WAF) Deploy a web application firewall (WAF) to monitor and filter incoming traffic to your WordPress site. WAFs can detect and block malicious requests associated with username enumeration attacks, providing an additional layer of defense against sophisticated threats and zero-day exploits. 9. Monitor and Audit Site Activity Regularly monitor site activity, access logs, and security audit logs to detect unauthorized access or suspicious behavior indicative of username enumeration attempts. Utilize logging mechanisms provided by WordPress or server-side solutions to facilitate timely detection and response to potential security incidents. 10. Conduct Security Audits and Penetration Testing Periodically conduct security audits and penetration testing to identify vulnerabilities and assess the effectiveness of your security measures. Engage professional security experts or use automated tools to simulate attacks and uncover potential weaknesses before malicious actors exploit them. Implementing these hardened security practices collectively strengthens your WordPress site’s defenses against username enumeration attacks and other evolving threats. By prioritizing security measures and staying proactive, site owners can maintain the integrity and trustworthiness of their WordPress environments. In the next section, we’ll explore additional strategies to enhance WordPress security posture and ensure comprehensive protection against [cyber threats](https://www.nilebits.com/blog/2024/07/cybersecurity-the-importance-of-the-human-element/). References WordPress Security Team. (n.d.). Hardening WordPress. Retrieved from https://wordpress.org/support/article/hardening-wordpress/ OWASP. (n.d.). XML External Entity (XXE) Prevention Cheat Sheet. Retrieved from https://owasp.org/www-project-cheat-sheets/cheatsheets/XML_External_Entity_Prevention_Cheat_Sheet.html Sucuri Blog. (2023). A Look at WordPress Brute Force Attacks. Retrieved from https://blog.sucuri.net/2023/01/a-look-at-wordpress-brute-force-attacks.html Wordfence. (2023). 7 Tips for Improving Your WordPress Site’s Security. Retrieved from https://www.wordfence.com/blog/2023/05/7-tips-for-improving-your-wordpress-sites-security/ W3Techs. (2023). Usage Statistics and Market Share of WordPress for Websites. Retrieved from https://w3techs.com/technologies/details/cm-wordpress/all/all
amr-saafan
1,922,410
How to make a Parallax Effect with Html, Css and JS
Introduction Parallax Effect enhance the quality of a website. Today I will tell you how...
0
2024-07-13T17:15:47
https://dev.to/lakshita_kumawat/how-to-make-a-parallax-effect-with-html-css-and-js-ib3
javascript, tutorial, webdev
## Introduction Parallax Effect enhance the quality of a website. Today I will tell you how to make a parallax website in a few steps. You can use my [asset](https://github.com/Lakshita-Kumawat/Parallax-Website-Tutorial) to make the website and visit my parallax website on [github](https://lakshita-kumawat.github.io/Parallax-Website-Tutorial/) ! ## Let's Get Started: Step 1: Write html code ``` <!DOCTYPE html> <head> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Parallax website</title> <link rel="stylesheet" href="style.css"> </head> <body> <main> <section id="hero-section"> <div id="moon"></div> <img src="/images/haunted.png" alt="haunted house"> <h1 id="text">Parallex Website</h1> <img src="/images/hands.png" alt="hands"> </section> <section id="content"> <h1>Parallex Website</h1> <p>Dummy text. Make sure the content is long enough so that the visitor is able to scroll the website</p> </section> </main> <script src="script.js" defer></script> </body> </html> ``` Here I create a basic html document in which there are two section. One is for the images and second for the content. I have give id to both section so that I can select them easily. Keep in mind to arrange your assets images properly. If you arrange a big image before the smaller once, then the smaller image will hide behind it. Don't forget to link your css and javascript document. Step 2: Defining Css document ``` *{ margin: 0; padding: 0; } body{ background-color: rgb(12, 10, 20); } #hero-section{ display: flex; justify-content: center; align-items: center; height: 100vh; } ``` Firstly, define the hero section styling. You should use the same styling so that all your content will be at the center. You can also set background color according to your assets. ``` #hero-section img{ position: absolute; top: 0; left: 0; width: 100%; height: 40rem; pointer-events: none; } ``` Now, the images styling. We use `left` and `top` to zero so that every image is align in the left and top side. This will do the trick to style the images. Here, I use height to 40rem because my image was very big. You have to set the width and height according to your images. ``` #hero-section #text{ position: relative; font-size: 3rem; color: white; font-weight: bolder; } #hero-section #moon{ position: relative; bottom: 25%; background-color: rgb(220, 218, 218); width: 8rem; height: 8rem; border-radius: 100%; } ``` Now, style the Heading and the moon. So can set these styles according to you. I have set moon to 25% bottom and at the center and give a grey tone. ``` #content{ position: relative; background-color: rgb(96, 21, 33); padding: 2rem; z-index: 1; // this is important } #content h1{ font-size: 2.5rem; margin-bottom: 1rem; } #content p{ font-size: 1.2rem; } ``` At the end, I have give some styling to content section. You can also change the styling of the content according to your choice but dont forgot to add `z-index`. It will make it a level higher than other. For more information about `z-index` you can visit to [w3school](https://www.w3schools.com/cssref/pr_pos_z-index.php) or on any another website. Step 3: Writing the Javascript ``` let text = document.getElementById('text'); let moon = document.getElementById('moon'); window.addEventListener('scroll',() =>{ let value = window.scrollY; //console.log(value); text.style.marginTop = value * 2.5+'px'; moon.style.right = value * 2.5+'px'; }) ``` Now the main thing, javascript. It is preety simple. First I select the hero-section heading and the moon. Then, a `addEventListener` with a function which works when the page is scroll. Now, we will access the value of y-axis with `window.scrollY`. You can console log it if you want. Now, we will multiply the value with the `left` of moon with 2.5 so that the moon will move towards left, when you scroll the page. If you think that it is moving fast then you can change 2.5px with a smaller number then it will move slower than before. Then, we will also multiply the value with the `marginTop` of `hero-section` heading with 2.5px so that when you scroll the page, the heading will come downwards. But the heading will still appear if we scroll downwards! Don't worry, did you remember we apply `z-index` on the content section. So the heading will hide below the content section. And now its finish. ## Conclusion So, that all for making a parallax website. You can add more things with it to make your website look even better. And Thank You for reading till the end. I hope you like it and don't forget to tell me in the comments if you like the post or how can I make my posts even better. Also please visit my [github](https://github.com/Lakshita-Kumawat) !
lakshita_kumawat
1,922,411
Olá mundo!
Criei este perfil para compartilhar o que ando estudando, os projetos que estou desenvolvendo e as...
0
2024-07-15T02:16:26
https://dev.to/unimatrix2/ola-mundo-40ge
webdev, coding
Criei este perfil para compartilhar o que ando estudando, os projetos que estou desenvolvendo e as aventuras técnicas que resultam deles. Espero que os conteúdos que eu poste aqui sejam úteis ou ao menos informativos para quem os ler, e de repente ajude com algum problema similar que esteja enfrentando. Tentarei ser o mais acessível possível, é minha intenção fazer parte dessa rede/bolha de desenvolvedores brasileiros que se esforçam para produzir um conteúdo muito bom e que diversas vezes já me ajudou a aprender e resolver problemas, tanto em projetos pessoais quanto em produtos profissionais das firmas onde já trabalhei. Os posts serão escritos em pt-BR e pretendo abordar assuntos mais relacionados a projetos que estou construindo. Dessa forma, não vou produzir posts sobre assuntos mais básicos ou iniciantes das stacks/langs que uso ou vou usar, ao menos não inicialmente. Espero conseguir manter uma frequência semanal de posts, mas não sou de ferro e trabalho muito na semana. De qualquer maneira, uma calorosa recepção da pessoinha aqui do outro lado da tela!
unimatrix2
1,922,489
sed: Delimiter Issues
When doing variable substitution with sed things break if the value contains the delimiter used by...
0
2024-07-13T16:08:16
https://blog.marco.ninja/notes/technology/sed/sed-delimiter-issues/
bash, linux
--- title: sed: Delimiter Issues published: true date: 2024-05-16 00:00:00 UTC tags: - bash - linux canonical_url: https://blog.marco.ninja/notes/technology/sed/sed-delimiter-issues/ --- When doing variable substitution with sed things break if the value contains the delimiter used by sed. For example: ```bash MY_VAR="This works" echo "My value is MY_VALUE" | sed "s/MY_VALUE/$MY_VAR/g" MY_VAR="This/breaks" echo "My value is MY_VALUE" | sed "s/MY_VALUE/$MY_VAR/g" ``` ## Solution: Known Input If the input is well understood you can just change the delimiter used to resolve the issue: ```bash MY_VAR="This still works" echo "My value is MY_VALUE" | sed "s|MY_VALUE|$MY_VAR|g" MY_VAR="This/now/works" echo "My value is MY_VALUE" | sed "s|MY_VALUE|$MY_VAR|g" ``` In this example I used `|` but you can basically use any character, as long as it is not part of the strings being operated on. ## Solution: Unknown Input If the input is completely unknowable it might be best just to escape the control character used by sed in the input. ```bash MY_VAR="This still works" MY_VAR=$(echo ${MY_VAR} | sed -e "s#/#\\\/#g") echo "My value is MY_VALUE" | sed "s/MY_VALUE/$MY_VAR/g" MY_VAR="This/now/works" MY_VAR=$(echo ${MY_VAR} | sed -e "s#/#\\\/#g") echo "My value is MY_VALUE" | sed "s/MY_VALUE/$MY_VAR/g" ```
professorlogout
1,922,492
HaProxy: Think About DNS Resolution
By default HAProxy resolves all DNS names in it’s config on startup and then never again. This might...
0
2024-07-13T16:05:31
https://blog.marco.ninja/notes/technology/haproxy/haproxy-think-about-dns-resolution/
devops
--- title: HaProxy: Think About DNS Resolution published: true date: 2024-06-04 00:00:00 UTC tags: - devops canonical_url: https://blog.marco.ninja/notes/technology/haproxy/haproxy-think-about-dns-resolution/ --- By default HAProxy resolves all DNS names in it’s config on startup and then never again. This might cause issues down the road if DNS records, for example the ones for backends, change. This section of the documentation is a good starting point as it describes IP address resolution using DNS in HAProy really well:[https://docs.haproxy.org/3.0/configuration.html#5.3](https://docs.haproxy.org/3.0/configuration.html#5.3) Additionally this guide can also be helpful:[https://www.haproxy.com/documentation/haproxy-configuration-tutorials/dns-resolution/](https://www.haproxy.com/documentation/haproxy-configuration-tutorials/dns-resolution/)
professorlogout
1,922,493
PicoCSS Sticky Footer
A sticky footer using PicoCSS html, body { height: 100vh; } body &gt; footer { position:...
0
2024-07-13T16:04:19
https://blog.marco.ninja/notes/technology/css/picocss-sticky-footer/
webdev, css
--- title: PicoCSS Sticky Footer published: true date: 2024-06-10 00:00:00 UTC tags: - webdev - css canonical_url: https://blog.marco.ninja/notes/technology/css/picocss-sticky-footer/ --- A sticky footer using [PicoCSS](https://picocss.com) ``` html, body { height: 100vh; } body > footer { position: sticky; top: 100vh; } ```
professorlogout
1,922,495
Operational System the concept for dummies
the operational system concepts for dummies
0
2024-07-13T16:00:00
https://dev.to/pvgomes/operational-system-the-concept-for-dummies-12m3
os
--- title: Operational System the concept for dummies published: true description: the operational system concepts for dummies tags: #os # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. published_at: 2024-07-13 16:00 +0000 --- An operating system (OS) is the software that manages all the hardware and other software on a computer. It acts as an intermediary between the user and the computer hardware, allowing you to run applications and programs. Think of it as the master controller of your computer, ensuring everything runs smoothly and efficiently. Key Functions of an Operating System: Resource Management: Manages the computer’s hardware, including the CPU, memory, and storage. File System Management: Organizes and controls how data is stored and retrieved on disks. Process Management: Handles the execution of multiple programs at once. User Interface: Provides a way for users to interact with the computer, typically through a graphical user interface (GUI). Most Popular Operating Systems: Windows: Developed by Microsoft, it’s the most widely used OS for personal computers. Known for its user-friendly interface and broad software compatibility. **macOS**: Created by Apple, it’s used on Mac computers. Known for its sleek design and strong integration with other Apple products. Linux: An open-source OS that’s popular for servers and also has distributions (distros) for personal use, such as Ubuntu, Fedora, and Debian. Known for its stability and security. Android: Developed by Google, it’s the most popular OS for smartphones and tablets. Built on a modified version of the Linux kernel. iOS: Apple’s mobile operating system used on iPhones and iPads. Known for its smooth performance and strong ecosystem. In summary, an operating system is essential software that makes your computer usable, managing everything from basic tasks to running complex applications. The most popular ones include Windows, macOS, Linux, Android, and iOS.
pvgomes
1,922,554
Nginx-Proxy-Manager: Leichtgewichtiger Reverse-Proxy 🌍
Erfahre, wie Du Nginx Proxy Manager mit Docker installierst und als Reverse Proxy für HTTP, HTTPS und...
0
2024-07-13T18:00:15
https://blog.disane.dev/nginx-proxy-manager-der-leichtgewichtige-reverse-proxy/
reverseproxy, homelab, dns, webserver
![](https://blog.disane.dev/content/images/2024/06/nginx_proxy-manager-der-leichtgewichtige-reverse-proxy_banner.jpeg)Erfahre, wie Du Nginx Proxy Manager mit Docker installierst und als Reverse Proxy für HTTP, HTTPS und UDP einrichtest! 🌍 --- Der Nginx Proxy Manager (NPM) ist ein leistungsstarkes und benutzerfreundliches Tool zur Verwaltung von Reverse-Proxy-Konfigurationen. Er vereinfacht den Prozess der Einrichtung eines Reverse-Proxys und ermöglicht es Dir, mehrere Anwendungen von einer einzigen Oberfläche aus zu verwalten. In diesem Artikel erfährst Du, wie Du den Nginx Proxy Manager mit Docker installierst, wie er als Reverse-Proxy funktioniert und welche Vorteile er gegenüber einer traditionellen Netzwerkkonfiguration bietet. ## Warum Nginx Proxy Manager verwenden? 🤔 Der Nginx Proxy Manager erleichtert die Verwaltung von Webservern und SSL-Zertifikaten und bietet eine intuitive Weboberfläche zur Konfiguration von Nginx-Reverse-Proxys. Hier sind einige seiner herausragenden Funktionen: * **HTTP- und HTTPS-Unterstützung**: Einfaches Konfigurieren von HTTP- und HTTPS-Sites. * **Let's Encrypt Integration**: Automatische Verwaltung von SSL-Zertifikaten. * **UDP-Weiterleitungen**: Unterstützung für UDP-Weiterleitungen neben den traditionellen TCP-Weiterleitungen. ## Installation des Nginx Proxy Managers mit Docker 🐳 Die Installation des Nginx Proxy Managers mit Docker ist einfach und schnell. Hier ist eine Schritt-für-Schritt-Anleitung, wie Du Nginx Proxy Manager auf Deinem Server einrichtest: ### Vorbereitung Stelle sicher, dass Docker auf Deinem System installiert ist. Falls nicht, kannst Du Docker über die offiziellen Docker-Installationsanleitungen installieren. ### Docker-Compose Datei erstellen Erstelle eine `docker-compose.yml` Datei mit folgendem Inhalt: ```yaml version: '3' services: npm: image: jc21/nginx-proxy-manager:latest restart: unless-stopped ports: - "80:80" - "81:81" - "443:443" environment: DB_SQLITE_FILE: "/data/database.sqlite" volumes: - ./data:/data - ./letsencrypt:/etc/letsencrypt ``` ### Container starten Führe folgende Befehle aus, um den Container zu starten: ```bash mkdir -p data letsencrypt docker-compose up -d ``` Das war's! Nginx Proxy Manager läuft nun auf Deinem Server und ist über die IP-Adresse Deines Servers auf Port 81 erreichbar. ## Wie funktioniert ein Reverse Proxy? 🛠️ Ein Reverse Proxy ist ein Server, der Anfragen von Clients entgegennimmt und sie an einen oder mehrere Backend-Server weiterleitet. Er fungiert als Vermittler zwischen den Clients und den Servern, die die eigentlichen Inhalte bereitstellen. Hier ist ein einfaches Diagramm, das die Funktionsweise eines Reverse Proxys zeigt: ![](https://blog.disane.dev/content/images/2024/06/image-10.png) ### Vergleich mit einem Netzwerk ohne Reverse Proxy **Ohne Reverse Proxy**: * Jede Anwendung muss auf einem separaten Port oder einer separaten Subdomain laufen. * Verwaltung von SSL-Zertifikaten für jede Anwendung ist komplex und zeitaufwändig. * Skalierbarkeit und Lastverteilung sind schwieriger zu handhaben. **Mit Reverse Proxy**: * Alle Anfragen werden über einen zentralen Punkt verwaltet. * SSL-Zertifikate können zentral verwaltet und automatisch erneuert werden. * Einfachere Skalierung und Lastverteilung der Backend-Server. ## Integration in Dein Netzwerk 🌐 Um den Nginx Proxy Manager in Dein Netzwerk zu integrieren, musst Du die DNS-Einstellungen und die Weiterleitungsregeln konfigurieren. ### DNS-Konfiguration Konfiguriere die DNS-Einträge Deiner Domain so, dass sie auf die IP-Adresse des Servers zeigen, auf dem der Nginx Proxy Manager läuft. Dies kann über Deinen Domain-Registrar oder Deinen internen DNS-Server erfolgen. Dies kann auch z.B. dein AdGuard Home aus diesem Artikel sein <https://blog.disane.dev/adguard-home-ad-blocker-fur-alle-gerate-im-netzwerk> ### Weiterleitungsregeln einrichten Melde Dich bei der Weboberfläche des Nginx Proxy Managers an und richte Weiterleitungsregeln für Deine Anwendungen ein. Du kannst HTTP- und HTTPS-Weiterleitungen sowie UDP-Weiterleitungen konfigurieren. ## HTTPS-Unterstützung mit Let's Encrypt 🔒 Der Nginx Proxy Manager unterstützt die automatische Verwaltung von SSL-Zertifikaten über Let's Encrypt. Du kannst SSL-Zertifikate einfach über die Weboberfläche einrichten und verwalten. ### Vorteile eines Reverse Proxys * **Sicherheitsverbesserung**: Durch die zentrale Verwaltung von SSL-Zertifikaten und die Möglichkeit, nur notwendige Ports freizugeben. * **Einfachere Verwaltung**: Alle Anwendungen können von einer einzigen Oberfläche aus verwaltet werden. * **Leistungsverbesserung**: Durch Caching und Lastverteilung können die Backend-Server effizienter genutzt werden. ## Fazit 🎉 Der Nginx Proxy Manager ist ein leistungsstarkes und benutzerfreundliches Tool zur Verwaltung von Reverse-Proxies. Mit Funktionen wie HTTPS-Unterstützung, Let's Encrypt-Integration und UDP-Weiterleitungen bietet er eine umfassende Lösung für die Verwaltung von Webanwendungen. ### Verweise auf andere Artikel 📚 Interessierst Du Dich dafür, wie Du interne Dienste mit SSL sichern kannst? Schau Dir diesen Artikel an 👇🏼 [Interne Services mit externer DNS-Wildcard erreichen![Preview image](https://blog.disane.dev/content/images/2023/11/interne_services-mit-dns-wildcards-erreichen-und-https-absichern_banner-2.jpeg)In dem Artikel zeige ich dir, wie du es hinbekommst, deine internen Dienste, sowohl von extern als auch intern, mit (Sub-)Domains und HTTPS zu erreichen 🎉](https://blog.disane.dev/interne-services-mit-dns-wildcards-erreichen-und-https-absichern/) --- If you like my posts, it would be nice if you follow my [Blog](https://blog.disane.dev) for more tech stuff.
disane
1,922,496
Creating and Using a Personal Homebrew Tap
Creating and Using a Personal Homebrew Tap Follow these steps to create a personal...
0
2024-07-13T16:07:47
https://dev.to/target-ops/creating-and-using-a-personal-homebrew-tap-53bm
Creating and Using a Personal Homebrew Tap ========================================== Follow these steps to create a personal Homebrew tap and use it: 1\. Create a new GitHub repository ---------------------------------- - The repository name should follow the format `homebrew-<tap>`. - For example, if you want to create a tap named `mytap`, the repository name should be `homebrew-mytap`. 2\. Add formulae to the tap --------------------------- - In the repository, create a new file for each formula you want to add. - The file should be named `<formula>.rb` and contain the Ruby code for the formula. - For example, to add a formula named `myformula`, create a file named `myformula.rb`. 3\. Tap the repository ---------------------- - On your local machine, use the `brew tap` command followed by your GitHub username and the tap name: ``` brew tap <username>/mytap ``` Replace `<username>` with your GitHub username. 4\. Install formulae from the tap --------------------------------- You can now install formulae from your tap using the `brew install` command followed by the formula name: `myformula` Replace `myformula` with the name of the formula you want to install. Please replace `<username>`, `mytap`, and `myformula` with your actual GitHub username, tap name, and formula name respectively. Also, this is a very basic example. Depending on your needs, you might want to add more complex formulae to your tap.
uplift3r
1,922,497
Tijoladas Digitais Ep 3
Bem vindos ao recap semanal do canal!! (https://www.youtube.com/@TheDigitalBrickLayer). 2 vídeos no...
0
2024-07-13T16:11:36
https://dev.to/thedigitalbricklayer/tijoladas-digitais-ep-3-5ca0
Bem vindos ao recap semanal do canal!! (https://www.youtube.com/@TheDigitalBrickLayer). 2 vídeos no canal: Como hospedar sites estáticos através de s3 buckets na aws Vídeo: https://youtu.be/y8njwwVqa6M A jornada de pedreiro a arquiteto de soluções aws, falando um cadinho da minha jornada de aprendizado para retirar essa certificação ep: DNS, s3 e healthchecks Vídeo: https://youtu.be/mXKlqzLKI10 Fizemos duas lives também: Desafios de programação: https://www.youtube.com/watch?v=NdSaq6avnvQ https://www.youtube.com/watch?v=wfCbyOHjXd4 System Design: https://www.youtube.com/watch?v=dw1oKlFZt18 https://www.youtube.com/watch?v=Ue-h4zNIKz0 https://www.youtube.com/watch?v=yuP6LtWBBi8 Artigos Postados: https://dev.to/thedigitalbricklayer/automacoes-editando-shorts-com-programacao-27de https://www.linkedin.com/pulse/automa%C3%A7%C3%B5es-editando-shorts-com-programa%C3%A7%C3%A3o-vitor-hugo-de-castro-silva-ugrlf/?trackingId=tsDkooMSm%2FdeAT59EIUNLQ%3D%3D
thedigitalbricklayer
1,922,499
Creating and Using a Personal Homebrew Tap
Creating and Using a Personal Homebrew Tap Follow these steps to create a personal...
0
2024-07-13T16:15:28
https://dev.to/uplift3r/creating-and-using-a-personal-homebrew-tap-44ng
Creating and Using a Personal Homebrew Tap ========================================== Follow these steps to create a personal Homebrew tap and use it: 1\. Create a new GitHub repository ---------------------------------- - The repository name should follow the format `homebrew-<tap>`. - For example, if you want to create a tap named `mytap`, the repository name should be `homebrew-mytap`. 2\. Add formulae to the tap --------------------------- - In the repository, create a new file for each formula you want to add. - The file should be named `<formula>.rb` and contain the Ruby code for the formula. - For example, to add a formula named `myformula`, create a file named `myformula.rb`. 3\. Tap the repository ---------------------- - On your local machine, use the `brew tap` command followed by your GitHub username and the tap name: ``` brew tap <username>/mytap ``` Replace `<username>` with your GitHub username. 4\. Install formulae from the tap --------------------------------- You can now install formulae from your tap using the `brew install` command followed by the formula name: `myformula` Replace `myformula` with the name of the formula you want to install. Please replace `<username>`, `mytap`, and `myformula` with your actual GitHub username, tap name, and formula name respectively. Also, this is a very basic example. Depending on your needs, you might want to add more complex formulae to your tap.
uplift3r
1,922,501
My First AWS EC2 Project: Hosting a Static Website
Are you looking to break into cloud computing? Let me share my recent experience setting up a static...
0
2024-07-13T16:17:13
https://dev.to/jesse_adu_akowuah_/my-first-aws-ec2-project-hosting-a-static-website-1ab5
Are you looking to break into cloud computing? Let me share my recent experience setting up a static website on Amazon Web Services (AWS) using an EC2 instance. This project was a great introduction to cloud infrastructure and helped me gain hands-on experience with AWS. Getting Started with EC2 First, I logged into the AWS console and searched for EC2. After navigating to the Instances section, I clicked "Create Instance." For this project, I chose a Linux server with a t3.micro instance type—perfect for learning as it's part of the AWS Free Tier. Setting Up the Instance Once my instance was running, I clicked on the instance ID to access its information page. The next step was connecting to the instance via SSH. I used Git Bash, ensuring I was in the same directory as my downloaded key pair (.pem) file. To connect, I ran these commands: ``` chmod 400 keyparfilename.pem ssh -i keyparfilename.pem ubuntu@ec2-public-IP-address.AWSareaID.compute.amazonaws.com ``` Installing Necessary Software After successfully connecting, I switched to root access with `sudo su -`. Then, I installed and started Nginx: ``` apt install nginx -y systemctl start nginx systemctl enable nginx ``` I also installed unzip to help with unpacking the website files: ``` apt install unzip ``` Deploying the Website Next, I downloaded and unzipped the website files: ``` curl "name-of-zipped-file" -o "preferred-file-name.zip" unzip preferred-file-name.zip ``` Finally, I copied all the files to the Nginx web directory: ``` cp -r * /var/www/html/ ``` Viewing the Result With everything set up, I copied the public IP address from the AWS console and pasted it into a web browser. Voila! My website was live. Challenges and Learnings While the process seems straightforward now, I encountered a few hiccups along the way. Understanding the EC2 dashboard took some time, and I initially struggled with SSH connections. However, careful reading of AWS documentation, Youtube videos, our slack channel and some troubleshooting helped me overcome these obstacles. This project taught me valuable lessons about cloud infrastructure, Linux command line, and web servers. It's amazing to see how quickly you can deploy a website using cloud services! If you're looking to start your cloud computing journey, I highly recommend trying out a similar project. It's a great way to get hands-on experience and build your confidence in working with AWS. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ztacp51z78fy8zyk9mip.png) Screenshot of the Hosted Website Website template accessed from : https://www.tooplate.com/view/2135-mini-finance
jesse_adu_akowuah_
1,922,555
Nginx Proxy Manager: Lightweight reverse proxy 🌍
Learn how to install Nginx Proxy Manager with Docker and set it up as a reverse proxy for HTTP, HTTPS...
0
2024-07-13T18:02:49
https://blog.disane.dev/en/nginx-proxy-manager-lightweight-reverse-proxy/
reverseproxy, homelab, dns, webserver
![](https://blog.disane.dev/content/images/2024/07/nginx-proxy-manager-der-leichtgewichtige-reverse-proxy_banner.jpeg)Learn how to install Nginx Proxy Manager with Docker and set it up as a reverse proxy for HTTP, HTTPS and UDP! 🌍 --- The Nginx Proxy Manager (NPM) is a powerful and user-friendly tool for managing reverse proxy configurations. It simplifies the process of setting up a reverse proxy and allows you to manage multiple applications from a single interface. In this article, you will learn how to install Nginx Proxy Manager with Docker, how it works as a reverse proxy and what advantages it offers over a traditional network configuration. ## Why use Nginx Proxy Manager? 🤔 Nginx Proxy Manager makes it easy to manage web servers and SSL certificates and provides an intuitive web interface for configuring Nginx reverse proxies. Here are some of its outstanding features: * **HTTP and HTTPS support**: Easily configure HTTP and HTTPS sites. * **Let's Encrypt integration**: Automatically manage SSL certificates. * **UDP redirects**: Support for UDP redirects in addition to traditional TCP redirects. ## Installing Nginx Proxy Manager with Docker 🐳 Installing Nginx Proxy Manager with Docker is quick and easy. Here is a step-by-step guide on how to set up Nginx Proxy Manager on your server: ### Preparation Make sure that Docker is installed on your system. If not, you can install Docker using the official Docker installation instructions. ### Create Docker-Compose file Create a `docker-compose.yml` file with the following content: ```yaml version: '3' services: npm: image: jc21/nginx-proxy-manager:latest restart: unless-stopped ports: - "80:80" - "81:81" - "443:443" environment: DB_SQLITE_FILE: "/data/database.sqlite" volumes: - ./data:/data - ./letsencrypt:/etc/letsencrypt ``` ### Start container Execute the following commands to start the container: ```bash mkdir -p data letsencrypt docker-compose up -d ``` That's it! Nginx Proxy Manager is now running on your server and can be reached via the IP address of your server on port 81. ## How does a reverse proxy work? 🛠️ A reverse proxy is a server that receives requests from clients and forwards them to one or more backend servers. It acts as an intermediary between the clients and the servers that provide the actual content. Here is a simple diagram showing how a reverse proxy works: ![](https://blog.disane.dev/content/images/2024/06/image-10.png) ### Comparison with a network without reverse proxy **Without reverse proxy**: * Each application must run on a separate port or subdomain. * Managing SSL certificates for each application is complex and time-consuming. * Scalability and load balancing are more difficult to manage. **With reverse proxy**: * All requests are managed from a central point. * SSL certificates can be centrally managed and automatically renewed. * Easier scaling and load balancing of backend servers. ## Integration into your network 🌐 To integrate the Nginx Proxy Manager into your network, you need to configure the DNS settings and the forwarding rules. ### DNS configuration Configure the DNS entries of your domain so that they point to the IP address of the server on which the Nginx Proxy Manager is running. This can be done via your domain registrar or your internal DNS server. This can also be, for example, your AdGuard Home from this article <https://blog.disane.dev/adguard-home-ad-blocker-fur-alle-gerate-im-netzwerk> ### Set up forwarding rules Log in to the Nginx Proxy Manager web interface and set up forwarding rules for your applications. You can configure HTTP and HTTPS redirects as well as UDP redirects. ## HTTPS support with Let's Encrypt 🔒 The Nginx Proxy Manager supports the automatic management of SSL certificates via Let's Encrypt. You can easily set up and manage SSL certificates via the web interface. ### Advantages of a reverse proxy * **Security improvement**: Through the centralized management of SSL certificates and the ability to release only necessary ports. * **Easier management**: All applications can be managed from a single interface. * **Performance improvement**: Backend servers can be used more efficiently thanks to caching and load balancing. ## Conclusion 🎉 The Nginx Proxy Manager is a powerful and user-friendly tool for managing reverse proxies. With features such as HTTPS support, Let's Encrypt integration and UDP forwarding, it offers a comprehensive solution for managing web applications. ### References to other articles 📚 Interested in how to secure internal services with SSL? Check out this article 👇🏼 [Reach internal services with an external DNS wildcard![Preview image](https://blog.disane.dev/content/images/2023/11/interne_services-mit-dns-wildcards-erreichen-und-https-absichern_banner-2.jpeg)In this article, I'll show you, how to reach your internal services, both externally and internally, with (sub)domains and HTTPS 🎉](https://blog.disane.dev/internal-services-reach-with-dns-wildcards-and-secure-https/) --- If you like my posts, it would be nice if you follow my [Blog](https://blog.disane.dev) for more tech stuff.
disane
1,922,502
Elevate Your Business with Digital Elevate: Your Premier SEO, Guest Posting, Link Building, and Digital Marketing Partner
In today's hyper-competitive digital landscape, achieving visibility and standing out among countless...
0
2024-07-13T16:17:24
https://dev.to/digital_elevatee/elevate-your-business-with-digital-elevate-your-premier-seo-guest-posting-link-building-and-digital-marketing-partner-4jbl
In today's hyper-competitive digital landscape, achieving visibility and standing out among countless online businesses is more challenging than ever. Whether you are a startup looking to establish a robust online presence or an established enterprise aiming to scale new heights, mastering the intricacies of digital marketing is crucial. This is where **Digital Elevate** steps in, your ultimate partner for SEO, guest posting, link building, and comprehensive digital marketing services. The Digital Elevate Difference [Whatsapp](https://wa.me/message/2WMP3QJLRD3DI1) At Digital Elevate, we understand that every business is unique, with distinct goals, challenges, and audiences. Our approach is tailored to meet your specific needs, ensuring that every strategy we implement is aligned with your business objectives. Here’s how we set ourselves apart: 1. Customized SEO Strategies Keyword Research and Analysis: We conduct in-depth research to identify the most relevant and high-traffic keywords for your industry. Our analysis goes beyond simple metrics, incorporating user intent and market trends to pinpoint opportunities for growth. On-Page Optimization: Our experts optimize every aspect of your website, from meta tags and headers to content and images, ensuring that your site is search-engine-friendly and user-centric. Technical SEO: We delve into the technical elements of your site, addressing issues like site speed, mobile-friendliness, and crawlability to enhance your site’s performance and search engine rankings. 2.Effective Guest Posting Services Quality Content Creation: Our team of skilled writers crafts compelling, high-quality content that resonates with your target audience and meets the standards of top-tier websites. Strategic Outreach:We leverage our extensive network of authoritative blogs and websites to secure guest post placements that drive traffic and boost your site’s credibility. White-Hat Techniques:All our guest posting practices adhere to the highest ethical standards, ensuring sustainable growth and avoiding any risk of penalties from search engines. 3.Robust Link Building Link Analysis and Audits: We conduct comprehensive audits to assess your current backlink profile, identifying strengths and areas for improvement. High-Quality Backlinks: Our team focuses on acquiring backlinks from reputable and relevant sources, enhancing your site’s authority and trustworthiness. Competitive Analysis: We analyze your competitors’ link-building strategies to identify opportunities and develop a customized approach that gives you a competitive edge. 4.Holistic Digital Marketing Solutions Content Marketing:We create and distribute valuable, relevant content designed to attract, engage, and convert your target audience. Social Media Management: Our social media experts develop and implement strategies that enhance your brand’s visibility and engagement across platforms. PPC Advertising:We design and manage pay-per-click campaigns that drive targeted traffic and deliver measurable results. Email Marketing: Our email marketing strategies nurture leads and build lasting relationships with your audience, driving conversions and customer loyalty. Why Choose Digital Elevate? Choosing the right digital marketing partner can make all the difference in achieving your business goals. Here’s why Digital Elevate is the right choice for you: Expert Team: Our team comprises seasoned professionals with extensive experience in SEO, guest posting, link building, and digital marketing. We stay updated with the latest industry trends and best practices to deliver cutting-edge solutions. Proven Results: We have a track record of delivering measurable results for our clients, helping them achieve higher rankings, increased traffic, and improved ROI. Transparent Reporting: We believe in complete transparency and provide regular reports and updates, ensuring you are always informed about the progress and performance of your campaigns. Client-Centric Approach: Your success is our success. We prioritize your needs and work closely with you to develop strategies that align with your business objectives and drive real results. Partner with Digital Elevate Today In a rapidly evolving digital landscape, staying ahead of the curve requires expertise, innovation, and a strategic approach. At Digital Elevate, we are committed to helping you navigate the complexities of digital marketing and achieve sustainable growth. Let us be your trusted partner in elevating your business to new heights. Contact us today to learn more about our services and discover how we can help you succeed in the digital world. Whatsapp:+923289640270 Mail: digital.elevatee@gmail.com
digital_elevatee
1,922,503
How to Build Dynamic Grafana Dashboards and Visualize Open-Source Community Data
Introduction In “How to Visualize Open Source Community Data”, we introduced how to fetch...
0
2024-07-13T16:17:29
https://dev.to/justlorain/how-to-build-dynamic-grafana-dashboards-and-visualize-open-source-community-data-4caa
github, opensource, tutorial, webdev
## Introduction In [“How to Visualize Open Source Community Data”](https://dev.to/justlorain/how-to-visualize-and-analyze-data-in-open-source-communities-1l35), we introduced how to fetch data through the GitHub GraphQL API and visualize and display the data using MySQL and Grafana Dashboards. This article will continue this topic, focusing on how to build a dynamic Grafana Dashboard using [Variables](https://grafana.com/docs/grafana/latest/dashboards/variables/). ## Static Panels and Dynamic Panels ### Static Panels In this context, static panels refer to panels where the content of the queries is "hard-coded." Imagine you want to monitor the load of your backend server `backend_server_1` using Grafana. You hard-code `backend_server_1` in the query, and it works fine. However, when one server is not enough to support your application, you might need to add `backend_server_2`, `backend_server_3`, etc. In this case, the initial query will become problematic, and you will need to modify the query every time you add or remove a server. This approach is cumbersome and not scalable. Take the panel showing the star count changes of various repositories from the previous article as an example: ![example1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5a41224u1dvhoy12ovcv.png) In the corresponding SQL query fetching data, the repository names are hard-coded. Such static panels are not advisable. Do not create such panels. ![example2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zoumhwe94g5e3ojttoyk.png) ### Dynamic Panels By utilizing the Variables feature provided by Grafana Dashboards, we can easily create a series of panels that dynamically change based on the set variables. Compared to static panels, dynamic panels are more flexible and natural by writing queries that adapt to different types of variables. ## Creating Variables Grafana Dashboard supports the creation of various types of variables (as shown in the table below). The methods of creation and usage are quite similar across different types. Here, we will mainly discuss `Query` type variables. If you want to learn about other types of variables, you can refer to the [official documentation](https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/). | Variable type | Description | | :---------------- | :----------------------------------------------------------- | | Query | Query-generated list of values such as metric names, server names, sensor IDs, data centers, and so on. | | Custom | Define the variable options manually using a comma-separated list. | | Text box | Display a free text input field with an optional default value. | | Constant | Define a hidden constant. | | Data source | Quickly change the data source for an entire dashboard. | | Interval | Interval variables represent time spans. | | Ad hoc filters | Key/value filters that are automatically added to all metric queries for a data source (Prometheus, Loki, InfluxDB, and Elasticsearch only). | | Global variables | Built-in variables that can be used in expressions in the query editor. | | Chained variables | Variable queries can contain other variables. | To create a variable, go to the Variables tab in the Dashboard Settings and click `New variable`. ![example3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g1cst55k0ivesbwi3qkv.png) After setting the variable type to `Query`, we can write a query to set the actual value of this variable. Here, we create a simple `login` variable to query the names of all GitHub organizations from the database. Since we use MySQL as the data source, we only need to write a simple SQL query. ![example4](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5724889csspgo9htj4nd.png) The effect of the `login` variable after creation is as follows: all organizations are queried and displayed in a dropdown list, and we can select an entry as the actual value of the `login` variable. ![example5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rt34p34bvtvjbp86sgxk.png) **Variables can also reference each other.** For example, if we want to create a `repos` variable that shows the names of all repositories under an organization, we can use the previously created `login` variable in the query for the `repos` variable using the `$varname` syntax. For more syntax for using variables, refer to the [official documentation](https://grafana.com/docs/grafana/latest/dashboards/variables/variable-syntax/). ![example6](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3vgjjj55z8xiic1jzkxd.png) Through the `Show dependencies` option in the Variables tab, you can clearly see the dependencies between variables. ![example7](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nyh4fhssucwjls3z194m.png) **We can also choose to set multiple values as the actual value of a variable.** ![example8](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w3s133oczouaipqzhwrk.png) Now, with the `Multi-value` option enabled, we can select multiple repository names under a specific organization as the value of the `repos` variable by referencing the `login` variable. ![example9](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4g89oa68quprozil7c2.png) ## Using Variables To use variables, embed them in specific queries or expressions using the `$varname` or `${var_name}` syntax, as we did when creating the `repos` variable. To illustrate the flexibility and advantages of dynamic panels over static panels, let's take the initial **panel showing the star count changes of various repositories** as an example. Now, we have the `repos` variable that can select multiple repository names under the organization chosen by the `login` variable and use them as the value of the `repos` variable. Thus, we can write our SQL query as follows: ![example10](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iwatnotnlxj7aywyyhfn.png) As shown in the figure: - We selected `cloudwego` as the value of the `login` variable and chose the `netpoll`, `kitex`, and `hertz` repositories in the `repos` variable. - In the SQL query, we used the `repos` variable with the `$varname` syntax. - Finally, our chart shows the star count change trends for the three selected repositories. This approach is much more elegant than hard-coding the repository names in the query. Another **important** point is that this usage requires adding an extra Transform to process the queried data. The `Multi-frame time series` Transform can use the returned string values as labels, enabling separate display of time series for each label. ![example11](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dh1q55uf6ri7m0go6hxh.png) ## Summary That concludes this article. We have explored the use of Variables in Grafana Dashboard through practical examples, allowing us to easily create flexible dynamic panels. By utilizing features like Grafana Dashboard Variables, the [OPENALYSIS](https://github.com/B1NARY-GR0UP/openalysis) project can more flexibly perform visual analysis of the configured open-source community data. I believe this will help you better build and develop your open-source community. ## References - https://github.com/B1NARY-GR0UP/openalysis - https://grafana.com/docs/grafana/latest/dashboards/variables/ - https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/ - https://grafana.com/docs/grafana/latest/dashboards/variables/variable-syntax/
justlorain
1,922,504
🚀 How to Start a Discord Bot Using Node.js with Google Bard API 🤖
Prerequisites Node.js installed. Discord account with a server. Google Cloud account...
0
2024-07-13T16:20:33
https://dev.to/mrimran/how-to-start-a-discord-bot-using-nodejs-with-google-bard-api-b5o
discord, node, bot
## Prerequisites 1. **Node.js** installed. 2. **Discord account** with a server. 3. **Google Cloud account** with Bard API access. ## Step 1: Set Up Your Project 1. **Create a New Directory:** ```bash mkdir discord-bot-bard cd discord-bot-bard ``` 2. **Initialize a Node.js Project:** ```bash npm init -y ``` 3. **Install Required Packages:** ```bash npm install discord.js axios dotenv ``` ## Step 2: Create Your Discord Bot 1. **Create a New Bot Application:** - Go to the [Discord Developer Portal](https://discord.com/developers/applications). - Create a new application. - Add a bot under the "Bot" tab. 2. **Get Your Bot Token:** - Copy your bot token from the "Bot" tab. 3. **Invite Your Bot to Your Server:** - Go to the "OAuth2" tab and generate an invite URL with necessary permissions. - Invite your bot using the generated URL. ## Step 3: Integrate Google Bard API 1. **Set Up Google Cloud Project:** - Create a new project in the [Google Cloud Console](https://console.cloud.google.com/). - Enable the Bard API. - Create an API key. 2. **Store Your API Keys:** Create a `.env` file: ```env DISCORD_TOKEN=your-discord-bot-token BARD_API_KEY=your-google-bard-api-key ``` ## Step 4: Write Your Bot Code Create an `index.js` file and add the following code: ```javascript require('dotenv').config(); const { Client, GatewayIntentBits } = require('discord.js'); const axios = require('axios'); const client = new Client({ intents: [GatewayIntentBits.Guilds, GatewayIntentBits.GuildMessages, GatewayIntentBits.MessageContent] }); client.once('ready', () => { console.log(`Logged in as ${client.user.tag}!`); }); client.on('messageCreate', async (message) => { if (message.author.bot) return; if (message.content.startsWith('!bard')) { const query = message.content.replace('!bard', '').trim(); if (!query) { message.channel.send('Please provide a prompt for Bard.'); return; } try { const response = await axios.post( 'https://bard.googleapis.com/v1/creations:generate', { prompt: query }, { headers: { 'Authorization': `Bearer ${process.env.BARD_API_KEY}`, 'Content-Type': 'application/json', }, } ); const bardResponse = response.data.response; message.channel.send(bardResponse); } catch (error) { console.error('Error fetching Bard response:', error); message.channel.send('Sorry, I could not fetch a response from Bard.'); } } }); client.login(process.env.DISCORD_TOKEN); ``` ## Step 5: Run Your Bot Run your bot with: ```bash node index.js ``` ## 🎉 Conclusion Your Discord bot is now online and ready to generate creative content using the Google Bard API! Have fun interacting with it and expanding its features. Happy coding! 💻✨
mrimran
1,922,505
LLNL Hackathon: Chat GPT, GraphRAG, & Vector DataBase Pipeline with LangChain
Overview Created an LLM Pipeline using LangChain, GraphRAG, and Vector Databases for the...
0
2024-07-13T16:50:50
https://dev.to/fran_can/llnl-hackathon-chat-gpt-graphrag-vector-database-pipeline-with-langchain-3hgc
###Overview Created an **LLM Pipeline** using **LangChain**, **GraphRAG**, and Vector Databases for the data secure local instance of ChatGPT(LivChat) at Lawrence Livermore National Labs(LLNL) to allow users access to LLNL-specific information through LivChat.The LLM Pipeline creates clearance level-dependent indexes of Knowledge Graphs and Vector Databases, and queries both to synthesize a response for the user. ##Step By Step Process Lawrence Livermore National Labs(LLNL) has a local instance of ChatGPT 3.5 & 4o(LivChat) on site to minimize sensitive data risks that arise when user queries are sent off site to OpenAI(ChatGPT), Microsoft(Copilot), etc. We proposed an improvement to the local model to **allow users to be able to ask this local ChatGPT instance questions that require internal LLNL knowledge** that OpenAI doesn't have access to when training ChatGPT. The improvement consists of using **LangChain**🦜🔗 to provide a method for LivChat to query clearance level specific data from Knowledge Graphs using GraphRAG, and a Vector Database using Pinecone, to then have LivChat read both queries and synthesize a response for the user. A flow chart of the overall process is shown below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wcqgra1xcebsa9uf7okx.png)
fran_can
1,922,506
Code Smell 258 - Secrets in Code
The Dangers of Hardcoding Secrets TL;DR: Use a secret manager to avoid hardcoding sensitive...
9,470
2024-07-13T16:25:27
https://maximilianocontieri.com/code-smell-258-secrets-in-code
webdev, beginners, programming, security
*The Dangers of Hardcoding Secrets* > TL;DR: Use a secret manager to avoid hardcoding sensitive information. # Problems - Security risk - Hard to update by operations teams - Code exposure - Data breaches - Audit Fails # Solutions 1. Use a [secrets manager](https://en.wikipedia.org/wiki/Key_management) 2. Use Environment variables outside the code 3. Encrypted storage # Context Writing secrets as plain text directly into your codebase exposes your code to significant security risks. Hardcoded secrets such as API keys, passwords, database credentials, and tokens can be easily exposed if your code is shared or compromised. Use a secret manager to store and manage your secrets. This strategy will reduce the risk of data breaches and make it easier to update and rotate secrets as needed. # Sample Code ## Wrong [Gist Url]: # (https://gist.github.com/mcsee/9f6389d74995cdebda3e81f5e9831fbe) ```python import requests api_key = "LILAS_PASTIA" response = requests.get("https://api.example.com", headers={"Authorization": f"Bearer {api_key}"}) ``` ## Right [Gist Url]: # (https://gist.github.com/mcsee/8ce54f7836bdc9552d505b9d350ee8d1) ```python import os import requests api_key = os.environ.get("API_KEY") # This is just an example. Might also be not as secure response = requests.get("https://api.example.com", headers={"Authorization": f"Bearer {api_key}"}) ``` # Detection [X] Automatic You can detect this smell by searching your codebase for hardcoded strings that resemble secrets. Code reviews and commercial security static analysis tools can also help identify these patterns. # Tags - Security # Level [x] Intermediate # AI Generation AI code generators might create this smell if they were trained with code datasets with hardcoded secrets. Always review generated code to ensure secrets are handled securely. # AI Detection Gemini, Claude, and ChatGPT detected the hardcoded secrets and suggested changes to the code. # Conclusion Using a secret manager enhances the security and maintainability of your code by ensuring that sensitive information is stored securely and can be easily managed and updated. Many repl and public codebases have a secret manager as an external utility. Make it a habit to handle all secrets with care and never let them slip into your codebase. # Relations {% post https://dev.to/mcsee/code-smell-215-deserializing-object-vulnerability-3hbj %} {% post https://dev.to/mcsee/code-smell-189-not-sanitized-input-853 %} # More Info [Stack Overflow](https://stackoverflow.com/questions/70559637/github-copilot-giving-away-api-keys-how-can-i-protect-my-keys) [GitHub Copilot security concerns](https://vlad-rad.medium.com/github-copilot-security-conserns-d4209f0d5c28) # Disclaimer Code Smells are my [opinion](https://dev.to/mcsee/i-wrote-more-than-90-articles-on-2021-here-is-what-i-learned-1n3a). # Credits Photo by [saeed karimi](https://unsplash.com/@saeedkarimi) on [Unsplash](https://unsplash.com/photos/woman-in-white-long-sleeve-shirt-kissing-girl-in-white-long-sleeve-shirt-JrrWC7Qcmhs) * * * > Passwords are like underwear: you don’t let people see it, you should change it very often, and you shouldn’t share it with strangers. _Chris Pirillo_ {% post https://dev.to/mcsee/software-engineering-great-quotes-26ci %} * * * This article is part of the CodeSmell Series. {% post https://dev.to/mcsee/how-to-find-the-stinky-parts-of-your-code-1dbc %}
mcsee
1,922,507
Delicious Traeger Recipes: Elevate Your Outdoor Cooking Game
Traeger grills have revolutionized outdoor cooking, bringing the art of smoking and grilling to...
0
2024-07-13T16:33:31
https://dev.to/zeeshan_aliseo_5d04c6686/delicious-traeger-recipes-elevate-your-outdoor-cooking-game-557n
webdev, javascript, beginners, programming
Traeger grills have revolutionized outdoor cooking, bringing the art of smoking and grilling to backyard chefs everywhere. These versatile wood pellet grills provide consistent heat and a smoky flavor that can transform ordinary ingredients into extraordinary dishes. Whether you're a novice griller or a seasoned pitmaster, these [Traeger recipes](https://onerecikmb.com/) will inspire you to create mouthwatering meals that impress your friends and family. 1. Smoked Brisket Ingredients: 1 whole packer brisket (10-12 lbs) 1/4 cup kosher salt 1/4 cup black pepper 1/4 cup garlic powder 1/4 cup paprika Wood pellets (hickory or mesquite recommended) Instructions: Preheat your Traeger grill to 225°F. Trim the brisket, removing excess fat and silver skin. In a small bowl, mix the salt, pepper, garlic powder, and paprika. Rub the mixture all over the brisket, ensuring it is evenly coated. Place the brisket on the grill, fat side up. Smoke the brisket for 8-10 hours, or until the internal temperature reaches 195-205°F. Remove the brisket from the grill and let it rest for at least 30 minutes before slicing. 2. Traeger Smoked Salmon Ingredients: 2 lbs salmon fillet 1/4 cup brown sugar 1/4 cup kosher salt 1/2 tsp black pepper 1/2 tsp garlic powder 1/2 tsp onion powder Lemon slices (optional) Wood pellets (cherry or apple recommended) Instructions: Preheat your Traeger grill to 180°F. In a small bowl, combine the brown sugar, salt, black pepper, garlic powder, and onion powder. Rub the mixture over the salmon fillet. Place the salmon directly on the grill grates, skin side down. Smoke the salmon for 3-4 hours, or until the internal temperature reaches 145°F. Remove from the grill and let it rest for a few minutes before serving. Garnish with lemon slices if desired. 3. BBQ Pulled Pork Ingredients: 1 pork shoulder (8-10 lbs) 1/4 cup mustard 1/4 cup apple cider vinegar 1/4 cup brown sugar 1/4 cup paprika 2 tbsp kosher salt 2 tbsp black pepper 1 tbsp garlic powder 1 tbsp onion powder Wood pellets (hickory or oak recommended) Instructions: Preheat your Traeger grill to 225°F. In a small bowl, mix the mustard and apple cider vinegar. Rub this mixture all over the pork shoulder. In another bowl, combine the brown sugar, paprika, salt, black pepper, garlic powder, and onion powder. Rub this mixture onto the pork shoulder. Place the pork shoulder on the grill, fat side up. Smoke the pork shoulder for 12-14 hours, or until the internal temperature reaches 195-205°F. Remove the pork from the grill and let it rest for at least 30 minutes. Shred the pork with two forks and mix with your favorite BBQ sauce. 4. Traeger Smoked Chicken Wings Ingredients: 3 lbs chicken wings 2 tbsp olive oil 1/4 cup Traeger chicken rub or your favorite poultry seasoning 1/4 cup honey 1/4 cup hot sauce Wood pellets (mesquite or pecan recommended) Instructions: Preheat your Traeger grill to 225°F. Toss the chicken wings in olive oil, then season them with the Traeger chicken rub. Place the wings directly on the grill grates. Smoke the wings for 1 hour, then increase the temperature to 350°F. Cook for an additional 30-45 minutes, or until the wings are crispy and the internal temperature reaches 165°F. In a small bowl, mix the honey and hot sauce. Toss the wings in the sauce before serving. 5. Traeger Smoked Mac and Cheese Ingredients: 1 lb elbow macaroni 1/4 cup butter 1/4 cup all-purpose flour 3 cups milk 1 cup heavy cream 4 cups shredded cheddar cheese 1 cup grated Parmesan cheese 1 tsp salt 1/2 tsp black pepper 1/2 tsp paprika 1/2 cup bread crumbs Wood pellets (apple or cherry recommended) Instructions: Preheat your Traeger grill to 350°F. Cook the macaroni according to package instructions and set aside. In a large saucepan, melt the butter over medium heat. Add the flour and cook for 1-2 minutes, stirring constantly. Slowly whisk in the milk and heavy cream. Cook until the mixture thickens, about 5-7 minutes. Remove from heat and stir in the cheddar cheese, Parmesan cheese, salt, pepper, and paprika until the cheese is melted and the sauce is smooth. Combine the cheese sauce with the cooked macaroni and transfer to a greased baking dish. Sprinkle bread crumbs on top. Place the baking dish on the grill and smoke for 25-30 minutes, or until the top is golden and bubbly. 6. Traeger Grilled Vegetables Ingredients: 1 zucchini, sliced 1 bell pepper, sliced 1 red onion, sliced 1 cup cherry tomatoes 2 tbsp olive oil 1 tsp salt 1/2 tsp black pepper 1/2 tsp garlic powder 1/2 tsp dried oregano Wood pellets (oak or maple recommended) Instructions: Preheat your Traeger grill to 375°F. In a large bowl, toss the vegetables with olive oil, salt, pepper, garlic powder, and dried oregano. Spread the vegetables in a single layer on a baking sheet. Place the baking sheet on the grill and cook for 20-25 minutes, or until the vegetables are tender and slightly charred. Serve hot as a side dish or over rice for a delicious vegetarian main course. Conclusion Traeger grills open up a world of culinary possibilities, allowing you to experiment with flavors and techniques that elevate your cooking. From smoked brisket and salmon to BBQ pulled pork and grilled vegetables, these recipes are sure to become staples in your grilling repertoire. Enjoy the smoky, rich flavors that only a Traeger grill can provide and make every meal an outdoor feast to remember.
zeeshan_aliseo_5d04c6686
1,922,508
Buy verified BYBIT account
Buy verified BYBIT account In the evolving landscape of cryptocurrency trading, the role of a...
0
2024-07-13T16:41:53
https://dev.to/allennfinn525/buy-verified-bybit-account-2mcp
Buy verified BYBIT account In the evolving landscape of cryptocurrency trading, the role of a dependable and protected platform cannot be overstated. Bybit, an esteemed crypto derivatives exchange, stands out as a platform that empowers traders to capitalize on their expertise and effectively maneuver the market. This article sheds light on the concept of Buy Verified Bybit Accounts, emphasizing the importance of account verification, the benefits it offers, and its role in ensuring a secure and seamless trading experience for all individuals involved. https://dmhelpshop.com/product/buy-verified-bybit-account/ https://dmhelpshop.com/product/buy-verified-bybit-account/ What is a Verified Bybit Account? Ensuring the security of your trading experience entails furnishing personal identification documents and participating in a video verification call to validate your identity. This thorough process is designed to not only establish trust but also to provide a secure trading environment that safeguards against potential threats. By rigorously verifying identities, we prioritize the protection and integrity of every individual’s trading interactions, cultivating a space where confidence and security are paramount. Buy verified BYBIT account Verification on Bybit lies at the core of ensuring security and trust within the platform, going beyond mere regulatory requirements. By implementing robust verification processes, Bybit effectively minimizes risks linked to fraudulent activities and enhances identity protection, thus establishing a solid foundation for a safe trading environment. Verified accounts not only represent a commitment to compliance but also unlock higher withdrawal limits, empowering traders to effectively manage their assets while upholding stringent safety standards. Advantages of a Verified Bybit Account Discover the multitude of advantages a verified Bybit account offers beyond just security. Verified users relish in heightened withdrawal limits, presenting them with the flexibility necessary to effectively manage their crypto assets. This is especially advantageous for traders aiming to conduct substantial transactions with confidence, ensuring a stress-free and efficient trading experience. Procuring Verified Bybit Accounts The concept of acquiring buy Verified Bybit Accounts is increasingly favored by traders looking to enhance their competitive advantage in the market. Well-established sources and platforms now offer authentic verified accounts, enabling users to enjoy a superior trading experience. Buy verified BYBIT account. https://dmhelpshop.com/product/buy-verified-bybit-account/ Just as one exercises diligence in their trading activities, it is vital to carefully choose a reliable source for obtaining a verified account to guarantee a smooth and reliable transition. Conclusionhow to get around bybit kyc Understanding the importance of Bybit’s KYC (Know Your Customer) process is crucial for all users. Bybit’s implementation of KYC is not just to comply with legal regulations but also to safeguard its platform against fraud. Although the process might appear burdensome, it plays a pivotal role in ensuring the security and protection of your account and funds. Embracing KYC is a proactive step towards maintaining a safe and secure trading environment for everyone involved. Ensuring the security of your account is crucial, even if the KYC process may seem burdensome. By verifying your identity through KYC and submitting necessary documentation, you are fortifying the protection of your personal information and assets against potential unauthorized breaches and fraudulent undertakings. Buy verified BYBIT account. Safeguarding your account with these added security measures not only safeguards your own interests but also contributes to maintaining the overall integrity of the online ecosystem. Embrace KYC as a proactive step towards ensuring a safe and secure online experience for yourself and everyone around you. How many Bybit users are there? With over 2 million registered users, Bybit stands out as a prominent player in the cryptocurrency realm, showcasing its increasing influence and capacity to appeal to a wide spectrum of traders. The rapid expansion of its user base highlights Bybit’s proactive approach to integrating innovative functionalities and prioritizing customer experience. This exponential growth mirrors the intensifying interest in digital assets, positioning Bybit as a leading platform in the evolving landscape of cryptocurrency trading. With over 2 million registered users leveraging its platform for cryptocurrency trading, Buy Verified ByBiT Accounts has witnessed remarkable growth in its user base. Bybit’s commitment to security, provision of advanced trading tools, and top-tier customer support services have solidified its position as a prominent competitor within the cryptocurrency exchange market. For those seeking a dependable and feature-rich platform to engage in digital asset trading, Bybit emerges as an excellent choice for both novice and experienced traders alike. Enhancing Trading Across Borders Leverage the power of buy verified Bybit accounts to unlock global trading prospects. Whether you reside in bustling financial districts or the most distant corners of the globe, a verified account provides you with the gateway to engage in safe and seamless cross-border transactions. The credibility that comes with a verified account strengthens your trading activities, ensuring a secure and reliable trading environment for all your endeavors. A Badge of Trust and Opportunity By verifying your BYBIT account, you are making a prudent choice that underlines your dedication to safe trading practices while gaining access to an array of enhanced features and advantages on the platform. Buy verified BYBIT account. With upgraded security measures in place, elevated withdrawal thresholds, and privileged access to exclusive opportunities, a verified BYBIT account equips you with the confidence to maneuver through the cryptocurrency trading realm effectively. Why is Verification Important on Bybit? Ensuring verification on Bybit is essential in creating a secure and trusted trading space for all users. It effectively reduces the potential threats linked to fraudulent behaviors, offers a shield for personal identities, and enables verified individuals to enjoy increased withdrawal limits, enhancing their ability to efficiently manage assets. By undergoing the verification process, users safeguard their investments and contribute to a safer and more regulated ecosystem, promoting a more secure and reliable trading environment overall. Buy verified BYBIT account. Conclusion In the ever-evolving landscape of digital cryptocurrency trading, having a Verified Bybit Account is paramount in establishing trust and security. By offering elevated withdrawal limits, fortified security measures, and the assurance that comes with verification, traders are equipped with a robust foundation to navigate the complexities of the trading sphere with peace of mind. Discover the power of ByBiT Accounts, the ultimate financial management solution offering a centralized platform to monitor your finances seamlessly. With a user-friendly interface, effortlessly monitor your income, expenses, and savings, empowering you to make well-informed financial decisions. Buy verified BYBIT account. Whether you are aiming for a significant investment or securing your retirement fund, ByBiT Accounts is equipped with all the tools necessary to keep you organized and on the right financial path. Join today and take control of your financial future with ease. Contact Us / 24 Hours Reply Telegram:dmhelpshop WhatsApp: +1 ‪(980) 277-2786 Skype:dmhelpshop Email:dmhelpshop@gmail.com
allennfinn525
1,922,509
O que é Design system e por que usar
Entenda o que é um Design System e suas vantagens
0
2024-07-13T18:45:16
https://dev.to/wps13/o-que-e-design-system-e-por-que-usar-4dij
designsystem, braziliandevs
--- title: O que é Design system e por que usar published: true description: Entenda o que é um Design System e suas vantagens tags: designsystem, braziliandevs # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2024-07-13 16:38 +0000 --- ## O que é um design system? Design system é uma biblioteca que é usada para organizar o tema de uma aplicação, incluindo suas variações. Ele contém os componentes mais comuns, como botões, entrada de texto e barra de navegação. Define as cores a serem usadas, desde cores primarias a terciárias, usadas em destaques, estados do sistema (erro/sucesso por exemplo) etc, também define diferentes usos de texto, como títulos. Além disso, inclui variações de tema (claro/noturno), assim como fontes e espaçamentos. ## Exemplo Aqui temos um exemplo de um design system simplificado, contendo algumas variações de fontes, cores e componentes, como botão e input. Nesse caso, as fontes poderiam ter usos diversos como títulos, corpo de texto, legendas etc, podendo isso ser indicado no design system. Já as cores são de acordo com o uso, como a cor principal, aqui chamada de primária, que é usada nos lugares de destaque como em botão de ação, tela de inicialização (splash) etc, temos cores relacionadas ao estado da aplicação, como aqui demonstrado com a cor para erro, onde poderia ter também sucesso e neutro, variando de acordo com a necessidade. ![Exemplo de um design system no figma, mostrando variações de fontes, cores, botão e input](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uhtqgq6xtbs4848cjobx.png) ## Pontos ao considerar se irá ou não ter um design system Ter um design system resulta em uma centralização dos componentes, permitindo reusar em diferentes aplicações, gerando assim uma padronização e facilitando a manutenção, uma vez que ao atualizar em um local irá refletir nos demais. Mas como nem tudo são flores, ao decidir criar e usar um design system têm-se um maior esforço inicial, tanto para decidir o que irá compor como para criar os componentes e suas variações. ## Considerações finais Ter ou não um design system depende da necessidade de sua aplicação, precisando assim avaliar os prós e contras caso a caso. Em aplicações que trabalhei e que tinham um design system foi bastante útil, como ao criar templates já agiliza por ter alguns componentes pré-definidos e em casos de mudança de marca, facilita a retafotação por não precisar ir em cada componente alterar manualmente.
wps13
1,922,510
Be careful when using NULL in PostgreSQL
When creating the data model, we often use the NULL value to signify a missing value in certain...
0
2024-07-13T16:47:35
https://dev.to/quachthanhhmd/be-careful-when-using-null-in-postgresql-5503
postgres, 3vl, intermediate, null
> When creating the data model, we often use the NULL value to signify a missing value in certain fields for specific rows. However, be cautious with NULL values, as they can lead to data mismatches when performing the SQL queries. The following article will highlight common issues encountered by new developers when using SQL queries and provide strategies to resolve these problems. ## What is the null value in SQL? A NULL value is **used to represent a data value that does not exist**. In other words, It's a placeholder for the missing or the value that users do not know at the current time. To illustrate, please consider the question: _how many Apple stores are in the world?_ If the answer is `0`, it means that we have no Apple stores in the world. But if the answer is "NULL", it stands for we don't know about it, and maybe we need to check on the internet and respond to it later. Damn.... we're getting into quite a bit of theory. Please get back to the main topic to find the magic of NULL value. ## Why Null value is special in SQL? Imagine that Graham was a newbie developer of a **ABC** startup company. In this company, he has the role of managing the HR system. Throughout this article, we will concentrate on the **employees** table likes: ```SQL CREATE TABLE employees ( employee_id serial primary key, employee_name varchar(50), age numeric, base_salary numeric, additional_salary numeric, CONSTRAINT ck_age CHECK ( age > 0 ), CONSTRAINT ck_base_salary CHECK ( base_salary >= 0 ), CONSTRAINT ck_additional_salary CHECK ( additional_salary >= 0 ) ); INSERT INTO employees (employee_name, age, base_salary, additional_salary) VALUES ('Graham', 24, 2000, 1000), ('Lynx', 25, 1300, null), ('Tian', 23, 1300, null), ('Kent', 32, 1040, null), (null, 28, 3000, null), (null, 50, 1000, null); ``` ### NULL value cannot be used with arithmetic operators. On a beautiful day, his manager - Max, told him: "Please help me calculate the salary of our employees, you can calculate by sum `base_salary` with `additional_salary`". "Yes sir, it's so easy, give me 5 seconds" - Graham said. Since he was a junior backend developer who wasn't familiar with SQL queries, he wrote naive SQL like: ```SQL SELECT employee_id, employee_name, age, (base_salary + additional_salary) as monthly_salary FROM employees; ``` ![Salary calculation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8wr9iad9b6p0i0igthfk.png) "_This is my result, it's so easy,_": Graham said. However, upon reviewing Graham's report, Max, who came from a technical background, angrily remarked, "_You forgot to convert NULL values to zero, didn't you?_". Surely you already know what happened. This problem caused by NULL fundamentally changes how it interacts with arithmetic operations. In other words, we cannot use NULL with any arithmetic operation such as: `+, -, *, /,...` To resolve this issue, we might use two approaches: - **Convert NULL to zero**: in PostgreSQL, the COALESCE function is used to convert NULL to the specific default value. For example: ```SQL SELECT employee_id, employee_name, age, COALESCE(base_salary, 0) + COALESCE(additional_salary, 0) as monthly_salary FROM employees; ``` ![coalese salary](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qk55y4mg8quy09u8ofk7.png) - **Setting a default value**: when designing the employee table, we can set default values for `base_salary` and `additional_salary` to `0`. Please keep in mind that we can not use the default value for all circumstances. To illustrate, if you want to calculate **BMI**, we have to calculate by the formula: `weight (kg) / [height (m)]2`. If you convert, for example, the weight to zero, the result will be: ```SQL SELECT COALESCE(NULL, 0) / POWER(1.8, 2) ``` The result will be zero (*this result should be NULL to show the unknown BMI caused by missing information*). It definitely causes some confusion for users looking at it. :memo:_**Summary**: be careful when using NULL with arithmetic operators since any arithmetic operation involving NULL will result in NULL. Consider using the default value with the nullable column when performing the calculation._ ### NULL value cannot be used in the comparison operation. Ignoring Graham's previous mistake since he's just a new member, Max assigned him a new task: "*Please find the employees named Graham, Lynx, Tian, and employees who did not input their names in our system"*. Graham thinks: "*Hmm, it's also so easy, I will make up for my previous mistake*". He wrote the SQL query: ``` SQL SELECT employee_id, employee_name, age FROM public.employees WHERE employee_name IN ('Graham', 'Lynx', 'Tian', NULL) ``` The result of this SQL was: ![in](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dcyjroam2wk8kupw4km4.png) "*Yeah, this is the information of Graham, Lynx, and Tian. Moreover, wonderful, all of our employees have updated their names, my boss*". After looking at this result, Tom has some suspicions about the veracity of the reports. Originally a technical person, he has double-checked in the database. ```SQL SELECT employee_id, employee_name, age FROM employees ``` Surprisingly, the result was: ![all-data](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/izgadlp204m0d47zi9mx.png) Do you know what is happening in this example? the answer is "**the NULL value cannot be equal or unequal to any value, so that cannot perform any comparison on NULL values"** It means that we cannot use `IN, NOT IN, =, <>, >, <, >=, <=,...` to compare with NULL. To correct this report, we have to write a SQL query like: ``` SQL SELECT employee_id, employee_name, age FROM public.employees WHERE employee_name in ('Graham', 'Lynx', 'Tian') OR employee_name IS null ``` <a name="not-in"></a>Moreover, please be careful when using the NULL value in conjunction with NOT IN. For instance, Graham wishes to retrieve all employees except those who haven't provided their name or his name. If he employs NOT IN in this scenario: ``` SQL SELECT employee_id, employee_name, age FROM employees WHERE employee_name NOT IN ('Graham', null) ``` Bummm, it seems like you have imagined the previous SQL results. We don't have any records in this query. ![NO Datea](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0si0lxbfnwsi2pgnbdfr.png) Why is it that when we use the `IN` operator, we retrieve multiple records, but when we use `NOT IN`, we end up with no results at all? Let's move to the next section to deep comprehend this issue. :memo:_*Summary**: NULL only operates with `IS` and `IS NOT NULL`._ ### NULL value with conjunctive operator. *To clarify, In SQL, conjunctive operations are used to combine conditions in a query. The main conjunctive operators are:* *1. **AND**: This operator combines two or more conditions, and all conditions must be true for the overall condition to be true.* *2. **OR**: This operator combines two or more conditions, and at least one condition must be true for the overall condition to be true.* Ok, enough theory :), let's get started on this section. As the previous section, NULL can not perform with any comparison operators, have you considered what occurs when employing conjunctive operators with NULL values? #### Using NULL with AND Let's back to Graham's workplace, he wants to find no-name employees and those more than 30 years old. If he performs: ``` SQL SELECT employee_id, employee_name, age FROM EMPLOYEES WHERE employee_name = NULL AND age > 30 ``` Hmmm, the result is that Graham cannot get any data: ![No-data](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4fm1w22uxf0l49hn8zy.png) It's so complicated, Graham tries to investigate what happens when the where condition `USER_NAME = NULL AND AGE > 30`. He can use: ``` SQL SELECT employee_id, employee_name, age, employee_name = NULL AND age > 30 FROM EMPLOYEES; ``` After executing this query, he gets the answer: ![Checking IN condition](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a9bcjuwlypuqb11tmnij.png) Weird, why do we have 2 records with null values? and the others are false? Graham noticed that when he used `NULL AND FALSE` the result was `FALSE`. In contrast, `NULL AND TRUE`, the result was NULL. Since where statement only matches and gets records when the query is true, he cannot retrieve any records. He tries to double-check again by the SQL: ```SQL SELECT NULL AND TRUE AS NULL_TRUE, NULL AND FALSE AS NULL_FALSE; ``` Wonderful, his statement was true. ![Check 3VL](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rggki3cnxww9egegppo0.png) #### Using NULL with OR Let's slightly modify Graham's previous request: **"finding no-name employees OR those more than 30 years old."**. We continue sightly change the Graham's SQL: ```SQL SELECT employee_id, employee_name, age FROM EMPLOYEES WHERE employee_name = NULL OR age > 30 ``` The results may surprise you a bit. We have 2 records in this case. ![OR condition](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2rh1g02nqjvjlyn5n7p0.png) We also try to investigate a previous topic: ```SQL SELECT employee_id, employee_name, age, USER_NAME = NULL OR AGE > 30 FROM EMPLOYEES; ``` ![Investigate](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7va647kqv9cyu5f3fql3.png) In this scenario, `NULL OR TRUE `gave the `TRUE` and `NULL OR FALSE` retrieve the `NULL` value. #### Clarify some points We moved on to 2 sections about NULL with the conjunctive operator. Please keep in mind that to clarify this section, Graham uses `USER_NAME = NULL` (actually, he must use `USER_NAME IS NULL`). In PostgreSQL, dealing with NULL values involves adhering to **Three-valued logic (3VL).** The following picture will clarify more detail about 3VL: ![3VL](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jw40k3vqqwevsmi2p1o2.png) If you wish to delve deeper into this theory, please refer to the following resource for more information: [Three-valued logic - Wikipedia](https://en.wikipedia.org/wiki/Three-valued_logic) ## Conclusion Managing `NULL` values can pose significant challenges when using certain operators in SQL statements. Carelessness in handling `NULL` values can lead to mismatches and affect user experience. Some individuals opt to avoid `NULL` values by substituting them with empty strings or default values. However, this approach can create confusion when attempting to represent unknown or undefined data in a table. ## References - Three-Valued Logic [Three-valued logic - Wikipedia](https://en.wikipedia.org/wiki/Three-valued_logic) - Postgres Documentation: [PostgreSQL](https://www.postgresql.org/) ## After Credit In section 2, we had issues when using [IN and NOT IN](#not-in) we didn't have any records, but when using `NULL` with `IN`, we found some records except `NULL`. ### Using IN with NULL In PostgreSQL, performing the SQL with IN like: ``` SQL SELECT column_name(s) FROM table_name WHERE column_name IN (value1, value2, NULL, ...); ``` When executing, PostgreSQL will convert this SQL to `column_name = value1 OR column_name = value2 OR column_name = NULL OR ...` like ``` SQL SELECT column_name(s) FROM table_name WHERE column_name = value1 OR column_name = value2 OR column_name = NULL OR ... ``` You might already be aware of this, right? due to Three-valued logic, `TRUE OR NULL` evaluates to `TRUE`. This explains why all records that meet the `WHERE` condition are returned, except those where the value is `NULL`. ### Using NOT IN with NULL Similar to the previous section, we execute the SQL statement: ```SQL SELECT column_name(s) FROM table_name WHERE column_name NOT IN (value1, value2, NULL, ...); ``` PostgreSQL will convert it to the SQL like: ```SQL SELECT column_name(s) FROM table_name WHERE column_name != value1 AND column_name != value2 AND column_name != NULL AND .. ``` When using `NOT IN` in PostgreSQL, each row in `table_name` is checked to determine if the value in `column_name` does not exist within the specified list of values from a subquery or an explicit list. According to Three-valued logic (3VL), `NULL != NULL` evaluates to `NULL`. Additionally, operations such as `NULL AND TRUE` result in `NULL`, and `NULL AND FALSE` result in `FALSE`. Therefore, based on these logical rules, the statement within the query cannot be evaluated as `true`. Consequently, regardless of the values included in the `NOT IN` list, the query will yield zero results. To deal with this problem we can use one of the following two ways: - **Using `AND` condition** ``` SQL SELECT column_name(s) FROM table_name WHERE column_name NOT IN (value1, value2,...) AND column is not NULL ``` - **Using NOT EXISTS** If we use `NOT EXISTS`, we have to convert the value list to a temporary table or SQL select. ``` SQL SELECT * FROM EMPLOYEES e WHERE NOT EXISTS ( SELECT FROM (VALUES ('Graham'), (NULL)) AS t (user_name) WHERE t.employee_name = e.employee_name OR e.employee_name IS NULL ); ``` In practice, it's recommended to use `EXISTS` and `NOT EXISTS` for better performance, but we'll cover this topic in upcoming articles. <hr/> Thanks for reading this section! I hope that this article can be helpful when performing the SQL query. If you have any questions, or discussions or notice any mistakes, please put comments at the end of this section. See you in the next article :) _Author: Thanh Quach_
quachthanhhmd
1,922,511
Creating a Simple Real-Time Chat Application with Socket.IO
Let's build a simple chat app using Socket.IO with chat rooms. Here's the live site link Simple Chat....
0
2024-07-13T17:36:46
https://dev.to/sanx/creating-a-simple-real-time-chat-application-with-socketio-33j2
webdev, javascript, beginners, tutorial
Let's build a simple chat app using Socket.IO with chat rooms. Here's the live site link [Simple Chat](https://simple-chat-ycdq.onrender.com/). Don't judge the UI 😅. I know it is not good and it is not even responsive. For now, let's focus on the concepts and functionality. Before diving into the code, let's cover some concepts about WebSockets and Socket.IO. **WebSockets** WebSockets is a protocol that provides full-duplex communication, meaning it allows a connection between the client and the server, enabling real-time, two-way communication. **Socket IO** Socket.IO is a JavaScript library that simplifies the use of WebSockets. It provides an abstraction over WebSockets, making real-time communication easier to implement. Let's dive into the coding part. Make sure Node.js is installed on your computer. **Folder Structure** ```json server/ ├── node_modules/ ├── public/ │ ├── chat.html │ ├── createRoom.html │ ├── index.html │ ├── JoinRoom.html │ ├── script.js │ └── styles.css ├── .env ├── .gitignore ├── package-lock.json ├── package.json └── server.js ``` **Initialize the Project** ```js npm init -y ``` install the required packages, first, let's install the express and nodemon ```js npm install express nodemon ``` let's configure the nodemon in package.json ```json "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "start": "node server.js", "dev":"nodemon server.js" }, ``` let's set up our application, and create a file name `server.js` or `index.js` as you wish. ```js const express = require("express"); const env = require("dotenv"); const app = express(); const port = process.env.PORT || 3000; const { createServer } = require("node:http"); const server = createServer(app); //when the user enters the localhost:3000, the request is handled by this route handler app.get('/', (req, res) => { //get request res.send('<h1>Hello world</h1>'); }); server.listen(port, () => { //listen on the port console.log(`server started ${port}`); }); ``` socket.io requires direct access to the underlying HTTP server to establish WebSocket connections. so we require the `createServer` from the HTTP module. and listen on the port. ```js npm run dev //start the server ``` we are not going to use any frontend frameworks, so let's serve the HTML static file from the server.  create a folder `public` and create an `index.html` inside the `public` folder ```js const path = require("path"); //require the path module app.use(express.static("public")); //make the folder as static //in the app.get let's send the html app.get("/", (req, res) => { res.sendFile(path.join(__dirname, "public", "index.html")); }); server.listen(port, () => { //listen on the port console.log(`server started ${port}`); }); ``` Now if you open the localhost, you can see the HTML page. let's integrate socket io ```js npm install socket.io ``` ```js const { Server } = require('socket.io'); //require const io = new Server(server); //initialize the new instance of the socket.io by passing the server object //listen on the connection event for incoming sockets io.on('connection', (socket) => { console.log('user connected'); }); ``` Now in the index.html ```html <body> <ul id="messages"></ul> <form id="form" action=""> <input id="input" autocomplete="off" /><button>Send</button> </form> <script src="/socket.io/socket.io.js"></script> <script> const socket = io(); //establish a connection </script> </body> ``` the above script is used to enable real-time communication between the client and the server. when you open localhost you can see the `user connected` as the output. **Emit some events** The main idea of socket.io is that you can send and receive any data you want. JSON and Binary data are also supported. Let's make it so that when the user sends the message, the server gets it as a chat message event.  in the index.html ```html <script src="/socket.io/socket.io.js"></script> <script> const socket = io(); const form = document.getElementById('form'); const input = document.getElementById('input'); form.addEventListener('submit', (e) => { e.preventDefault(); if (input.value) { socket.emit('message', input.value); input.value = ''; } }); </script> ``` and print out the message received from the client. ```js io.on('connection', (socket) => { socket.on('message', (msg) => { console.log('message: ' + msg); }); }); ``` **Broadcasting** Now we have received the message from the client, then we need to broadcast it to all the users. socket.io gives the `io.emit()` method, to emit the events or messages. ```js io.on('connection', (socket) => { socket.on('message', (msg) => { io.emit('message', msg); }); }); ``` So, now the messages from the client are broadcasted to all the users, we need to capture the message and include it in the page. ```html <script src="/socket.io/socket.io.js"></script> <script> const socket = io(); const form = document.getElementById('form'); const input = document.getElementById('input'); const messages = document.getElementById('messages'); form.addEventListener('submit', (e) => { e.preventDefault(); if (input.value) { socket.emit('message', input.value); input.value = ''; } }); socket.on('message', (msg) => { const item = document.createElement('li'); item.textContent = msg; messages.appendChild(item); }); </script> ``` the output image is shown below. ![output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vgl0xd2pm4lntvg6r9dg.png) We have done the basic chat application, let's make chat rooms. **How do chat rooms work?** socket io provides functionality for the rooms, so the event is emitted, and only the room members can see the message. ![chat room concept](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2819r1o1t8atjs59ggpt.png) Now I am going to create a 3 html file in the `public` folder. `chat.html` , `createRoom.html` , `joinRoom.html` I am going to shift the code from the `index.html` to the `chat.html`. In the `index.html` ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>Index</title> </head> <body> <div class="main"> <div class="text"> <h1>Chat</h1> </div> <button id="create-room">Create Room</button> <button id="join-room">Join Room</button> </div> <script> const createRoom = document.getElementById("create-room"); const joinRoom = document.getElementById("join-room"); createRoom.addEventListener("click", () => { location.href = "./createRoom.html"; }); joinRoom.addEventListener("click", () => { location.href = "./JoinRoom.html"; }); </script> </body> </html> ``` In the `createRoom.html` ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Create Room</title> </head> <body> <div class="form-container"> <h1>Create Room</h1> <form action="/create" method="post"> <input type="text" placeholder="Enter the room name" name="roomName" required> <input type="submit" value="Create Room"> </form> </div> </body> </html> ``` In the `JoinRoom.html` ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Join Room</title> </head> <body> <div class="form-container"> <h1>Join Room</h1> <form action="/joinRoom" method="post"> <input type="text" placeholder="Enter the room name" name="Join" required> <input type="submit" value="Join Room"> </form> </div> </body> </html> ``` So let me explain what I have done in this file, in the index.html I have made a UI with two buttons to create a room or join a room, it will be redirected to the respective files when it is clicked. **Creating the room** In `createRoom.html`, there's a form with the POST method, and the action is set to `/create`. When submitted, the request is sent to the server. ```html <form action="/create" method="post"> <input type="text" placeholder="Enter the room name" name="roomName" required> <input type="submit" value="Create Room"> </form> ``` in `server.js` ```js app.use(express.urlencoded({ extended: true })); //parse the data const rooms = []; //to store the rooms app.post("/create", (req, res) => { const roomName = req.body.roomName; //get the room name from the body if (rooms.includes(roomName)) { //see if the room already exists res.send("room already exits"); } else if (roomName) { rooms.push(roomName); //push the room name to the array res.redirect(`/chat.html?room=${roomName}`); //redirect to the chat.html with the room name } else { res.redirect("/createRoom.html"); } }); ``` we are not using any databases to store the room names, for this tutorial let's use an array to store the room names. let's do the similar functionalities for the join room in `server.js` ```js app.post("/joinRoom", (req, res) => { const roomName = req.body.Join; if (rooms.includes(roomName)) { res.redirect(`/chat.html?room=${roomName}`); } else { res.send("enter the valid room"); } }); ``` Let's make a separate JavaScript file `script.js` in the public. I have copied all the JS from the chat.html to a separate file. `script.js` ```js const socket = io(); function getUrlname(name) { const urlPara = new URLSearchParams(window.location.search); return urlPara.get(name); } const roomName = getUrlname("room"); if (roomName) { socket.emit("join room", { roomName }); } else { console.log("room is not valid"); } const form = document.getElementById("form"); const input = document.getElementById("input"); const messages = document.getElementById("messages"); form.addEventListener("submit", (e) => { e.preventDefault(); if (input.value) { socket.emit("message", { roomName, message: input.value }); input.value = ""; } }); socket.on("message", ({ message }) => { if (!message) { alert("room is not valid"); } else { const item = document.createElement("li"); console.log(message); item.textContent = message; messages.appendChild(item); } }); // Connection state recovery const disconnectBtn = document.getElementById("disconnect-btn"); disconnectBtn.addEventListener("click", (e) => { e.preventDefault(); if (socket.connected) { disconnectBtn.innerText = "Connect"; socket.disconnect(); } else { disconnectBtn.innerText = "Disconnect"; socket.connect(); } }); ``` In the server, after the create room or join room is handled it is redirected to the chat.html page, we have also sent the room name in the search params.  In the above code, we are extracting the room name from the search params using `URLSearchParams` and checking if the room is valid. Each message from the client is emitted with the message and with the room name. With the help of this we can now check from which room the message is from. in `server.js` ```js io.on("connection", (socket) => { socket.on("join room", ({ roomName }) => { if (rooms.includes(roomName)) { socket.join(roomName); } }); socket.on("message", ({ roomName, message }) => { if (rooms.includes(roomName)) { io.to(roomName).emit("message", { message }); } else { socket.emit("message", { message: false }); } }); }); ``` socket provides a `join` method to join the room, and we check where the message is from, and the message is broadcasted to only that room member. this is done by the `io.to(roomName).emit("message",{message})` Now open multiple tabs on the browser create different rooms join in that room and check if this works. here's the full code for `server.js` ```js const express = require("express"); const app = express(); const env = require("dotenv"); const path = require("path"); const { Server } = require("socket.io"); const { createServer } = require("node:http"); const server = createServer(app); env.config(); const port = process.env.PORT; app.use(express.static("public")); app.use(express.json()); app.use(express.urlencoded({ extended: true })); const io = new Server(server, { connectionStateRecovery: {}, }); const rooms = []; app.get("/", (req, res) => { res.sendFile(path.join(__dirname, "public", "index.html")); }); app.post("/create", (req, res) => { const roomName = req.body.roomName; if (rooms.includes(roomName)) { res.send("room already exits"); } else if (roomName) { rooms.push(roomName); res.redirect(`/chat.html?room=${roomName}`); } else { res.redirect("/createRoom.html"); } }); app.post("/joinRoom", (req, res) => { const roomName = req.body.Join; if (rooms.includes(roomName)) { res.redirect(`/chat.html?room=${roomName}`); } else { res.send("enter the valid room"); } }); io.on("connection", (socket) => { socket.on("join room", ({ roomName }) => { if (rooms.includes(roomName)) { socket.join(roomName); } }); socket.on("message", ({ roomName, message }) => { if (rooms.includes(roomName)) { io.to(roomName).emit("message", { message }) } else { socket.emit("message", { message: false }); } }); }); server.listen(port, () => { console.log(`server started ${port}`); }); ``` we have done it!! and finally, let me tell one concept that is called connection state recovery. In the server code, there is one line ```js const io = new Server(server, { connectionStateRecovery: {}, }); ``` and in the client script.js ```js // Connection state recovery const disconnectBtn = document.getElementById("disconnect-btn"); disconnectBtn.addEventListener("click", (e) => { e.preventDefault(); if (socket.connected) { disconnectBtn.innerText = "Connect"; socket.disconnect(); } else { disconnectBtn.innerText = "Disconnect"; socket.connect(); } }); ``` it is used to handle the disconnections by pretending that there was no disconnection. this feature will temporarily store all the events that are sent by the server and will try to restore the state when the client reconnects. This is it!!! here's the link for the source code - [GitHub](https://github.com/sanjayr-12/simple-chat) here's the live site link - [Simple Chat](https://simple-chat-ycdq.onrender.com/) Thank You!!! Follow me on: [Linkedin](https://www.linkedin.com/in/sanjay-r-ab6064294/), [Medium](https://medium.com/@sanjayxr), [Instagram](https://www.instagram.com/_sanjayxr_12_/), [X](https://twitter.com/sanjayxr_12)
sanx
1,922,513
How to 10x downsize fonts - Building a font optimizer
I was using fonts wrong. I usually got most of my fonts from Google Fonts. For a long time, I just...
0
2024-07-14T09:41:12
https://dev.to/wimadev/how-to-10x-downsize-fonts-29nl
webdev, docker, python, beginners
I was using fonts wrong. I usually got most of my fonts from [Google Fonts](https://fonts.google.com/). For a long time, I just downloaded the font files from their website and added them directly to my projects. Not realizing, that this was a mistake... <img src="https://media.giphy.com/media/d1E1msx7Yw5Ne1Fe/giphy.gif?cid=790b7611vajjg0vdwxg81fljpl51zs5i69uzbn8e2vcnodvc&ep=v1_gifs_search&rid=giphy.gif&ct=g" > I just blindly assumed, that what can be downloaded from Google must be the gold standard, and can be used right away. **The problem is:** The fonts that can be downloaded from Google (not talking about using the CDN here), and many other providers, are not optimized for the web. The files can be several hundred KBs large and therefore significantly increase the bundle size and loading times. My good friend and CSS magician [mrflix](https://github.com/mrflix) recently showed me 2 simple optimizations that drastically decreased the font file sizes, that I want to share with you today. With these 2 tips, I was able to bring down a 232 KB font down to only 20 KB, that's more than a 10x improvement! And going even further: I'll also show you, how we can create our own font optimization script, turn it into a webapp and deploy it, so that we end up with something similar to https://fontconverter.com. You can find all the code from this demo in this repository: [font-optimizer](https://github.com/sliplane-support/font-optimizer) ## What I am going to cover: 1. [How to optimize fonts for the web](#how-to-optimize-fonts-for-the-web) 2. [Create your own font optimization script](#create-your-own-font-optimization-script) 3. [Turn the script into a web app](#turn-the-script-into-a-web-app) 4. [Containerize the app with Docker](#containerize-the-app-with-docker) 4. [Deploy the app with Sliplane](#deploy-the-app-with-sliplane) 5. [Summary](#summary) ## How to optimize fonts for the web <img src="https://media.giphy.com/media/l396Sok6H728brWlG/giphy.gif?cid=790b7611p1zjs29w5ytc05lgx024bc3ih406vyfsakg5rd1p&ep=v1_gifs_search&rid=giphy.gif&ct=g"> **Step 1:** Convert the font to woff2 (Web Open Font Format 2.0). What you can download from Google is usually in ttf format (True Type), an uncompressed font format used by operating systems and printers. woff2 on the other hand is designed for the web, the font format is compressed and therefore significantly smaller. It is nowadays [supported in most browsers](https://caniuse.com/woff2) but if you want to be extra safe, you can use woff. There are some free online font converters out there, e.g. https://fontconverter.com, https://www.fontconverter.io, https://www.fontconverter.org, where you can upload your font and have them convert it to woff2. But we can do even better! **Step 2:** We can optimize the font even further by excluding glyphs, that we don't need. Fonts typically contain a bunch of special characters or glyphs in foreign alphabets (e.g. greek or cyrillic), that you might not need for your project. By excluding them, we can downsize even further. So let's start writing our own font optimizer! Not only is it more efficient, but we also get some additional privacy and security benefits, since we don't need to share any data and we know that what's coming back is virus free. ## Create your own font optimization script There is a popular Python library called [fonttools](https://pypi.org/project/fonttools/). We can use the `pyftsubset` command to throw out any unused glyphs and convert our ttf fonts to woff2. I stole this little bash script from [mrflix](https://github.com/mrflix) (slightly modified): ```bash # path to .ttf file that should be converted input_file=$1 # get rid of the extension filename=$(echo "$input_file" | cut -f 1 -d '.') # settings for pyftsubset # Define a subset of unicodes and layout features to be included. Here: # Basic Latin, Latin-1 Supplement, Double Quotation Marks, €, „, “, “, EN Dash, EM Dash, Minus, EM Space, EN Space unicodes="U+0020-007F,U+0080-00FF,U+201E,U+201C,U+20AC,U+201E,U+201C,U+201D,U+2013,U+2014,U+2212,U+2002,U+2003" layout_features="tnum,ss01,ss02,ss03,ss04,ss05,ss06,ss07,ss08,ss09,ss10,ss11,ss12,ss13,ss14,ss15" flavor="woff2" # run the command pyftsubset ${input_file} --unicodes=${unicodes} --layout_features=${layout_features} --flavor=${flavor} --output-file=${filename}.woff2 ``` > Note: Bash scripts are not natively supported on Windows. Check out the next section to see how you can run this with Python. The script defines a subset of unicode characters and layout features that should be included in our optimized font and then executes the `pyftsubset` command. Simply add or remove any other unicode characters that you might need or want to discard for your project. You can checkout [this site](https://jrgraphix.net/r/Unicode/), to get the unicodes. Before we can use the script, we need to make sure [Python](https://www.python.org/) 3.8 or later is installed on our system. Follow the official installation guide from the Python website. Next, open we open terminal in our project and run ```bash python3 -m venv myenv ``` This creates a virtual environment named `myenv`, where we can install python dependencies to avoid dependency conflicts with the host. We activate the environment with ```bash source myenv/bin/activate ``` Our shell should change. All subsequent commands are now executed in this virtual environment. > You can exit the virtual environment with the `deactivate` command Let's install `fonttools` and it's sub dependency `brotli`by running ```bash pip install brotli fonttools ``` Next, we create a file named `font-optimizer.sh` and copy the bash code from above in there. Make the file executable by running ```bash chmod +x font-optimzer.sh ``` We can now call the optimizer by running ```bash ./font-optimizer.sh font-file.ttf ``` The command should spit out an optimized `font-file.woff2`. I tested it with the font [Playwrite Cuba](https://fonts.google.com/specimen/Playwrite+CU) from Google fonts and compressed the light font weight down from 232 KB to 20 KB. In comparison: By only using fontconverter.com I ended up with 77 KB. ## Turn the script into a web app To push things even further, let me show you, how we can turn this little script into a web app that we can deploy and use from anywhere. I will keep it very minimal here, so feel free to extend this example application as you like. Since we are dealing with Python here, let's use it for our web app, too! The only issue is, I don't really know Python very well... 😓 Thanks to GPT, this is shouldn't be a problem. <img src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExemkyamVlYjNrZ3E3c3BwZjBubXN0bWgyY3BtZDJpYjRvNnZjYjZpciZlcD12MV9naWZzX3NlYXJjaCZjdD1n/waUr4629vk5QhD558r/giphy.gif" > My initial prompt looked like this: > Create a Python webapp that converts ttf files to woff2 using the subset command from fonttools And after around 11 iterations and some manual adjustments I ended up with this `app.py` file: ```python from flask import Flask, request, jsonify, send_file import io from fontTools.subset import Subsetter, Options from fontTools.ttLib import TTFont app = Flask(__name__) @app.route('/') def index(): return ''' <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>TTF to WOFF2 Converter</title> </head> <body> <h1>TTF to WOFF2 Converter</h1> <form action="/convert" method="post" enctype="multipart/form-data"> <label for="ttfFile">Upload TTF file:</label> <input type="file" id="ttfFile" name="ttfFile" accept=".ttf" required> <button type="submit">Convert</button> </form> </body> </html> ''' @app.route('/convert', methods=['POST']) def upload_file(): # some basic validation if 'ttfFile' not in request.files: return jsonify({'error': 'No file part'}), 400 file = request.files['ttfFile'] if file.filename == '': return jsonify({'error': 'No selected file'}), 400 if not file.filename.endswith('.ttf'): return jsonify({'error': 'File is not a TTF font'}), 400 # Read the TTF file into memory ttf_data = io.BytesIO(file.read()) # Convert TTF to WOFF2 in memory try: # Load the font font = TTFont(ttf_data) # Subsetting options options = Options() options.flavor = 'woff2' options.layout_features = [ 'tnum', 'ss01', 'ss02', 'ss03', 'ss04', 'ss05', 'ss06', 'ss07', 'ss08', 'ss09', 'ss10', 'ss11', 'ss12', 'ss13', 'ss14', 'ss15' ] options.unicodes = ( list(range(0x0020, 0x007F + 1)) + list(range(0x0080, 0x00FF + 1)) + [0x201E, 0x201C, 0x20AC, 0x201E, 0x201C, 0x201D, 0x2013, 0x2014, 0x2212, 0x2002, 0x2003] ) # Subsetting the font subsetter = Subsetter(options=options) subsetter.populate(unicodes=options.unicodes) subsetter.subset(font) # Save the subsetted font to WOFF2 woff2_data = io.BytesIO() font.flavor = 'woff2' font.save(woff2_data) woff2_data.seek(0) # Send the WOFF2 file as a response return send_file(woff2_data, mimetype='font/woff2', as_attachment=True, download_name=file.filename.replace('.ttf', '.woff2')) except Exception as e: return jsonify({'error': 'Conversion failed', 'details': str(e)}), 500 if __name__ == '__main__': app.run(host='0.0.0.0', port=5000, debug=True) ``` > Note: This is very minimal example and should only serve as a starting point for you to further develop it, make it beatiful and sprinkle in a bit of your favorite JavaScript framework 🪄 The app spins up a simple web server using the [flask](https://flask.palletsprojects.com/en/3.0.x/) framework. On GET / it serves HTML code containing a form to upload ttf font files. On submit the files are sent to the `/convert` endpoint. This endpoint does some basic validation first, and converts the font file in memory using the `fonttools` library. Before we can give it a try, we need to also install `flask` by running ```bash pip install flask ``` Then execute the script with ```bash python app.py ``` The app should now be running on `http://localhost:5000` 🥳 ## Containerize the app with Docker If you have not heard of Docker yet, Docker is a handy tool, that allows you to bundle your application together with all the depenedencies it needs. This bundle is called "Docker Image" and you can run instances of this image which are called "Containers". Remember that you had to install Python, Flask, brotli and fonttools manually, in order to get the optimzer running? You can put all of these installation instructions inside a `Dockerfile` like this: ```Dockerfile # Start from a base image, that already has Python installed FROM python:3.12.4-slim # Install any additionally needed packages RUN pip install flask brotli fonttools # Copy our app.py file into the container COPY app.py . # Run the application CMD ["python", "app.py"] ``` Now we only need to install Docker once, instead of every single dependendency for every single app that we ever want to run, which can lead to dependency conflicts, if e.g. one app uses Python 3 and another one Python 2... It's similar to a virtual machine, but more lightweight, since Docker images do not include the operating system. For development purposes use [Docker Desktop](https://docs.docker.com/guides/getting-started/get-docker-desktop/) and follow the installation instructions on their website. Once Docker is installed we can build the Dockerfile into a Docker image by running ```bash docker build . -t font-optimizer ``` in our app directory. The image will be tagged "font-optimizer". To run it, use ```bash docker run -p 5000:5000 font-optimizer ``` This command will spin up a container, that runs our app. For security reasons, containers are shielded off from the host machine. Everything that happens in a container, stays in the container. To access the app inside the container, we used the `-p 5000:5000` option, which tells docker to forward everything that happens inside the container on port 5000 to our host port 5000. So we can now access the app by visiting http://localhost:5000 🥳 There are two more modification we should probably to do. 1. First, we get a warning, that states, we are running a development server and we should use a web server like [gunicorn](https://gunicorn.org/) to run the app in production mode. 2. Second, it makes sense to pin our dependencies to specific versions, so that we don't run into compatibility issues, since running the installation commands without a pinned version, will always fetch the newest. Let's create a `requirements.txt` file, and move our dependency definitions there: ```txt flask==3.0.3 brotli==1.1.0 fonttools==4.53.1 gunicorn==22.0.0 ``` Now update the `Dockerfile` to ```Dockerfile # Use an official Python runtime as a parent image FROM python:3.12.4-slim # COPY requirements.txt into the container COPY requirements.txt . # Install dependencies defined in requirements.txt RUN pip install -r requirements.txt # Copy app.py into the container COPY app.py . # Run the application CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "app:app"] ``` ### Deploy the app with Sliplane To deploy this application I will use [Sliplane](https://sliplane.io?utm_source=font-optimizer). Sliplane is a simple Docker hosting platform, that I co founded. All we have to do is push our code to GitHub and connect the repo to Sliplane. > To try out the app, you can create free demo servers. Just take note that demo servers are for testing purposes and will be deleted after 48 hours. Login to GitHub and create a new repository named "font-optimizer". Open a terminal in your project folder and run ```bash # initialize a local repository git init # select the files, that you want to include in your repo (staging) # . means everything inside your current directory git add . # include your staged files alongside a commit message git commit -m "initial commit" # add a remote branch git remote add origin <INSERT REMOTE GITHUB PATH HERE> # push your local code to the remote repo git push -u origin main ``` Next, login to Sliplane with your GitHub account and do the following: - Create a new Project named "Font Optimizer" - Navigate to the project and click on "Deploy Service" - Select the server you want to deploy to, or create a new one if needed - Select "Repository" as the deploy source. - Search for "font-optimizer" in the repositories list. > If your repository does not show up, you need to configure access to the repository first. Click on "Configure Repository Access" and select the "font-optimizer" repository. Take note, that it might take a minute for the newly connected repo to show up here. You can hit "Refresh list" to see, if the repo is available. - After selecting the repo, go with the default settings and hit "Deploy" at the bottom of the page After the deployment finished, you can access your app via on your own sliplane.app domain! 🥳 ## Summary Using raw ttf fonts slows down your website drastically. Optimize your fonts by 1. converting ttf fonts to woff2 2. throwing out any unused glyphs We used the Python library [fonttools](https://pypi.org/project/fonttools/) to create a simple optimization script and turned it into a webapp. We then used [Docker](https://docker.com) to containerize the app and [Sliplane](https://sliplane.io?utm_source=font-optimizer) to deploy it. You can find all the code in this repository: [font-optimizer](https://github.com/sliplane-support/font-optimizer) Hope you learned something. Share, Like, Comment, Subscribe.
wimadev
1,922,514
AOP Concepts and Terminology: A Comprehensive Guide
Aspect-Oriented Programming (AOP) is a paradigm in software development that aims to increase...
0
2024-07-13T16:53:26
https://dev.to/nikhilxd/aop-concepts-and-terminology-a-comprehensive-guide-4fbe
webdev, beginners, programming, tutorial
Aspect-Oriented Programming (AOP) is a paradigm in software development that aims to increase modularity by allowing the separation of cross-cutting concerns. This blog will delve into the key concepts and terminology associated with AOP, providing a detailed understanding of how it works and why it's useful. ### Table of Contents 1. Introduction to AOP 2. Core Concepts of AOP - Aspect - Join Point - Advice - Pointcut - Introduction - Weaving 3. Types of Advice - Before Advice - After Returning Advice - After Throwing Advice - After (Finally) Advice - Around Advice 4. AOP Implementations - Spring AOP - AspectJ 5. Benefits of AOP 6. Conclusion ### 1. Introduction to AOP AOP is a programming technique that complements object-oriented programming (OOP) by allowing developers to modularize concerns that cut across multiple classes or modules. Examples of cross-cutting concerns include logging, transaction management, security, and error handling. Instead of scattering these concerns throughout the codebase, AOP allows them to be defined in one place. ### 2. Core Concepts of AOP #### Aspect An **aspect** is a module that encapsulates a concern that cuts across multiple classes or methods. In AOP, aspects are implemented as regular classes annotated with special annotations or XML configurations. Aspects can contain multiple advices, pointcuts, and other AOP-related constructs. #### Join Point A **join point** is a specific point in the execution of a program, such as the execution of a method or the handling of an exception. In the context of AOP, join points are the points where an aspect can be applied. Join points are typically method calls or field accesses in the program's execution flow. #### Advice **Advice** is the action taken by an aspect at a particular join point. In other words, it defines what the aspect does and when it does it. There are several types of advice, each determining the timing of the aspect's execution relative to the join point. #### Pointcut A **pointcut** is an expression that matches join points. Pointcuts allow aspects to be applied selectively rather than indiscriminately to all join points. By defining pointcuts, developers can specify the exact join points where advice should be executed. Pointcuts are defined using expressions that match method signatures, field accesses, and other program constructs. #### Introduction An **introduction** (also known as an inter-type declaration) allows an aspect to declare that a class implements an interface or has certain methods or fields. This is useful for adding new behavior to existing classes without modifying their source code. #### Weaving **Weaving** is the process of applying aspects to a target object to create a new proxied object. Weaving can be done at different times: - **Compile-time**: Aspects are woven into the code during the compilation process. - **Load-time**: Aspects are woven when the classes are loaded into the JVM. - **Runtime**: Aspects are woven dynamically at runtime using proxy-based mechanisms. ### 3. Types of Advice #### Before Advice **Before advice** runs before the join point. It is typically used for tasks like logging, security checks, or any pre-processing logic. #### After Returning Advice **After returning advice** runs after the join point completes normally. It can be used for post-processing tasks, such as logging the method's return value or cleaning up resources. #### After Throwing Advice **After throwing advice** runs if the join point throws an exception. This type of advice is useful for error handling and logging exceptions. #### After (Finally) Advice **After (finally) advice** runs after the join point finishes, regardless of its outcome (whether it completes normally or throws an exception). It is often used for releasing resources or performing cleanup tasks. #### Around Advice **Around advice** surrounds a join point. It has the ability to control whether the join point should proceed and can alter the behavior both before and after the join point execution. This type of advice is the most powerful and flexible. ### 4. AOP Implementations #### Spring AOP **Spring AOP** is a framework for implementing aspects in Java applications. It uses proxy-based mechanisms for weaving aspects at runtime. Spring AOP is a key module in the Spring Framework, providing declarative transaction management, method interception, and other features. #### AspectJ **AspectJ** is a powerful and mature AOP framework that extends Java with additional constructs for defining aspects. It supports compile-time, load-time, and runtime weaving. AspectJ provides a rich set of pointcut expressions and is widely used for its comprehensive features. ### 5. Benefits of AOP - **Modularity**: AOP promotes better separation of concerns, making the codebase more modular and easier to maintain. - **Reusability**: Aspects can be reused across different parts of the application. - **Maintainability**: Centralizing cross-cutting concerns in aspects reduces code duplication and makes the codebase easier to manage. - **Flexibility**: AOP allows developers to add new behavior to existing code without modifying the code itself. ### 6. Conclusion Aspect-Oriented Programming is a powerful paradigm that enhances modularity and maintainability in software development. By understanding and applying the core concepts and terminology of AOP, developers can effectively manage cross-cutting concerns and create more modular and maintainable applications. Whether using Spring AOP or AspectJ, AOP provides the tools needed to keep codebases clean and efficient.
nikhilxd
1,922,515
Buy GitHub Accounts
https://dmhelpshop.com/product/buy-github-accounts/ Buy GitHub Accounts GitHub, a renowned platform...
0
2024-07-13T16:55:12
https://dev.to/allennfinn525/buy-github-accounts-463
ERROR: type should be string, got "https://dmhelpshop.com/product/buy-github-accounts/\n\nBuy GitHub Accounts\nGitHub, a renowned platform for hosting and collaborating on code, is essential for developers at all levels. With millions of projects worldwide, having a GitHub account is a valuable asset for seasoned programmers and beginners alike. However, the process of creating and managing an account can be complex and time-consuming for some.\n\nThis is where purchasing GitHub accounts becomes advantageous. By buying a GitHub account, individuals can streamline their development journey and access the numerous benefits of the platform efficiently. Whether you are looking to enhance your coding skills or expand your project collaborations, a purchased GitHub account can be a practical solution for optimizing your coding experience.\n\nhttps://dmhelpshop.com/product/buy-github-accounts/\n\nWhat is GitHub Accounts\nGitHub accounts serve as user profiles on the renowned code hosting platform GitHub, where developers collaborate, track code changes, and manage version control seamlessly. Creating a GitHub account provides users with a platform to exhibit their projects, contribute to diverse endeavors, and engage with the GitHub community. Buy verified BYBIT account\n\nYour GitHub account stands as your virtual identity on the platform, capturing all your interactions, contributions, and project involvement. Embrace the power of GitHub accounts to foster connections, showcase your skills, and enhance your presence in the dynamic world of software development. Buy GitHub Accounts.\n\nhttps://dmhelpshop.com/product/buy-github-accounts/\n\n\n\nCan You Buy GitHub Accounts?\n Rest assured when considering our buy GitHub Accounts service, as we distinguish ourselves from other PVA Account providers by offering 100% Non-Drop PVA Accounts, Permanent PVA Accounts, and Legitimate PVA Accounts. Our dedicated team ensures instant commencement of work upon order placement, guaranteeing a seamless experience for you. Embrace our service without hesitation and revel in its benefits.\n\nGitHub stands as the largest global code repository, playing a pivotal role in the coding world, especially for developers. It serves as the primary hub for exchanging code and engaging in collaborative projects.\n\nHowever, if you find yourself without a GitHub account, you may be missing out on valuable opportunities to share your code, learn from others, and contribute to open-source projects. A GitHub account not only allows you to showcase your coding skills but also enhances your \nprofessional network and exposure within the developer community.\n\n\nhttps://dmhelpshop.com/product/buy-github-accounts/\n\n\nAccess To Premium Features\nUnlock a realm of possibilities and boost your productivity by harnessing the full power of Github’s premium features. Enjoy an array of benefits by investing in Github accounts, consolidating access to premium tools under a single subscription and saving costs compared to individual purchases. Buy GitHub Accounts.\n\nhttps://dmhelpshop.com/product/buy-github-accounts/\n\nCultivating a thriving Github profile demands dedication and perseverance, involving continuous code contributions, active collaboration with peers, and diligent repository management. Elevate your development journey by embracing these premium features and optimizing your workflow for success on Github.\n\nhttps://dmhelpshop.com/product/buy-github-accounts/\n\nGitHub private repository limits\nFor those of you who actively develop and utilize GitHub for managing your personal coding projects, consider the storage limitations that may impact your workflow. GitHub’s free accounts, which currently allow for up to three personal repositories, may prove stifling if your coding demands surpass this threshold. In such cases, upgrading to a dedicated buy GitHub account emerges as a viable remedy.\n\nTransitioning to a paid GitHub account not only increases repository limits but also grants a myriad of advantages, including unlimited collaborators access, as well as premium functionalities like GitHub Pages and GitHub Actions. Thus, if your involvement in personal projects confronts space constraints, transitioning to a paid account can seamlessly accommodate your expanding requirements.\n\nGitHub Organization Account\nWhen managing a team of developers, leveraging a GitHub organization account proves invaluable. This account enables the creation of a unified workspace where team members can seamlessly collaborate on code, offering exclusive features beyond personal accounts like the ability to edit someone else’s repository. Buy GitHub Accounts.\n\nEstablishing an organization account is easily achieved by visiting github.com and selecting the “Create an organization” option, wherein you define a name and configure basic settings. Once set up, you can promptly add team members and kickstart collaborative project work efficiently.\n\nTypes Of GitHub Accounts\nInvesting in a GitHub account (PVA) offers access to exclusive services typically reserved for established accounts, such as beta testing programs, early access to features, and participation in special GitHub initiatives, broadening your range of functionality.\n\nBy purchasing a GitHub account, you contribute to a more secure and reliable environment on the GitHub platform. A bought GitHub account (PVA) allows for swift account recovery solutions in case of account-related problems or unexpected events, guaranteeing prompt access restoration to minimize any disruptions to your workflow.\n\nAs a developer utilizing GitHub to handle your code repositories for personal projects, the matter of personal storage limits may be of significance to you. Presently, GitHub’s complimentary accounts are constrained to three personal repositories. Buy GitHub Accounts.\n\nShould your requirements surpass this restriction, transitioning to a dedicated GitHub account stands as the remedy. Apart from elevated repository limits, upgraded GitHub accounts provide numerous advantages, including access to unlimited collaborators and premium functionalities like GitHub Pages and GitHub Actions.\n\nThis ensures that if your undertakings encompass personal projects and you find yourself approaching storage boundaries, you have viable options to effectively manage and expand your development endeavors. Buy GitHub Accounts.\n\nWhy are GitHub accounts important?\nGitHub accounts serve as a crucial tool for anyone seeking to establish a presence in the tech industry. Regardless of your experience level, possessing a GitHub account equates to owning a professional online portfolio that highlights your skills and ventures to potential employers or collaborators.\n\nThrough GitHub, individuals can exhibit their coding proficiency and projects, fostering the display of expertise in multiple programming languages and technologies. This not only aids in establishing credibility as a developer but also enables prospective employers to evaluate your capabilities and suitability for their team effectively. Buy GitHub Accounts.\n\nBy maintaining an active GitHub account, you can effectively demonstrate a profound dedication to your field of expertise. Employers are profoundly impressed by individuals who exhibit a robust GitHub profile, as it signifies a genuine enthusiasm for coding and a willingness to devote significant time and energy to refining their abilities.\n\nThrough consistent project sharing and involvement in open source projects, you have the opportunity to showcase your unwavering commitment to enhancing your capabilities and fostering a constructive influence within the technology community. Buy GitHub Accounts.\n\nConclusion\nFor developers utilizing GitHub to host their code repositories, exploring ways to leverage coding skills for monetization may lead to questions about selling buy GitHub accounts, a practice that is indeed permissible. However, it is crucial to be mindful of pertinent details before proceeding. Buy GitHub Accounts.\n\nNotably, GitHub provides two distinct account types: personal and organizational. Personal accounts offer free access with genuine public storage, in contrast to organizational accounts. Before delving into selling a GitHub account, understanding these distinctions is essential for effective decision-making and navigating the platform’s diverse features.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 ‪(980) 277-2786\nSkype:dmhelpshop\n"
allennfinn525
1,922,517
Day 24 Task - 90daysofdevops : Complete Jenkins CI/CD Project
What is GitHub Webhook GitHub Webhooks are a feature of the GitHub platform that allow developers to...
0
2024-07-13T17:08:17
https://dev.to/oncloud7/day-24-task-90daysofdevops-complete-jenkins-cicd-project-270j
aws, awschallenge, devops, jenkins
**What is GitHub Webhook** GitHub Webhooks are a feature of the GitHub platform that allow developers to receive notifications about events that occur in a GitHub repository. Webhooks are HTTP callbacks that are triggered by specific events in a repository, such as a new commit, pull request, or issue being created or updated. They provide a way to integrate external systems or services with GitHub and automate workflows based on repository activity. When an event occurs, GitHub sends a POST request to a specified URL (endpoint) configured by the developer, containing information about the event. Developers can secure their webhooks by using a secret token to verify the authenticity of incoming requests, ensuring that only valid requests are processed. **Advantages of GitHub WebHook** Using a webhook can provide many benefits to a programmer according to Indeed. Some of the benefits are: **Increase work efficiency**:- Webhooks are a simple and effective way to send information from one application to another without complex procedures or the risk of missing important data. Unlike other APIs that regularly check for data, webhooks can immediately push data from events into other applications as soon as it's available. **Easier automation**:- Webhooks allow for easier automation of data transfer processes and user-defined actions for triggering events in software programs and applications. With webhooks, data can be instantly transferred, making it useful for creating repeated events based on the same triggering event. **Accurate data transfer**:- Webhooks in programming provide specificity by allowing direct connections between specific parts of an application, without needing to connect multiple code elements to transfer data. This makes setting up a webhook faster and easier compared to using other APIs or callbacks. It also helps in keeping the code clean and understandable by reducing the amount of complicated code in the program. **Easy setup:-**Using webhooks to connect applications often requires less setup time and effort compared to other methods. This is because webhooks use HTTP, which is a widely used internet protocol for transferring documents between web browsers and servers. Since most websites already use HTTP, adding a webhook to an application can be done easily without creating new infrastructure in your code. **Seamless integration**:- If you're making a new app and want to link it with other apps using webhooks, it's usually easy because many apps support webhook integration. This is useful if you want to make an app that sends notifications, messages, or events based on existing app activity. You can even make services that let users link and integrate the apps they want and make personalized events and actions for their work or productivity. **How to create Webhooks?** To create a GitHub webhook: 1.Go to the settings page for your repository. 2.Click on the "Webhooks & Services" tab. 3.Then, click on the "Add webhook" button. 4.In the webhook configuration, you will need to provide the following information: Payload URL: This is the URL that will receive the webhook notifications. Events: Select the events that you want to be notified about. Secret: This is a secret key that will be used to verify the authenticity of the webhook notifications. 5.Once you have configured the webhook, click on the "Create webhook" button. **Task-01** Fork this repository: To Fork Repository Go To location of Project then there is FORK option on top right side. Click on it you will able to fork Project. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5z34oxwoze7tospzx3ty.png) Create a connection to your Jenkins job and your GitHub Repository via GitHub Integration : Connection to your Jenkins Job with GitHub projects You can done through GitHub Hook : Go to project settings -> click on webhook -> Add webhook ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3r5f2ijofu85iknztbk.png) Now for Installing GitHub Integration plugin in Jenkins Open your jenkins dashboard. Click on the Manage Jenkins button on your Jenkins dashboard Click on Manage Plugins Install GitHub Integration plugin ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lieanj6ns5lokicyi4ae.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p524dn6i1my6y3y5geiu.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/whe3h3izxs4shyt8ia8o.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e5j0a86mdj1sdrckstyl.png) Read About GitHub WebHooks and make sure you have CICD setup **Task-02** In the Execute shell run the application using Docker compose ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8znrt7ahtwuzp81gks6.png) You will have to make a Docker Compose file for this Project NOTE : Install the docker compose on your server ( like for ubuntu - docker-compose install -y) After build you can check console output. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8aw7vbhvtlpui0c8jqdj.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m0b0s7dpwaj839390974.png) You will have to make a Docker Compose file for this Project (Can be a good open source contribution) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q6su8r1rbkdm69wh4xfr.png)
oncloud7
1,922,518
Linux Commands You Should Master
Linux commands form the backbone of navigating and managing a Linux system efficiently through the...
0
2024-07-13T17:15:20
https://dev.to/rubiin/linux-commands-you-should-master-2kmd
linux, terminal, cli, unix
--- title: Linux Commands You Should Master published: true description: tags: linux,terminal,cli,unix cover_image: https://www.redhat.com/sysadmin/sites/default/files/styles/full/public/2020-02/blur-bright-business-codes-207580.jpg?itok=eUyGgeea # Use a ratio of 100:42 for best results. # published_at: 2024-07-13 16:58 +0000 --- Linux commands form the backbone of navigating and managing a Linux system efficiently through the terminal. Whether you're a beginner or an experienced user, mastering these commands will greatly enhance your productivity and control over your system. In this article, here are the top Linux commands that every user should master for effective terminal usage Navigation and File Management ### cd: Change directory ```sh cd directory_name ``` ### ls: List directory contents ```bash ls options directory_path ``` ### pwd: Print working directory ```bash pwd ``` ### cp: Copy files and directories ```bash cp source_file destination_file ``` ### mv: Move (rename) files and directories ```bash mv source destination ``` ### rm: Remove files and directories ```bash rm file_name ``` ### mkdir: Make directories ```bash mkdir directory_name ``` ### rmdir: Remove empty directories ```bash rmdir directory_name ``` ### cat: Concatenate and display files ```bash cat file_name ``` ### less/more: View file contents interactively (one screen at a time) ```bash less file_name more file_name ``` ### head/tail: View the beginning or end of a file ```bash head file_name tail file_name ``` ### grep: Search for patterns in files ```bash grep pattern file_name ``` ### find: Search for files in a directory hierarchy ```bash find directory_path options ``` ### ln: Create links between files ```bash ln -s target_file link_name ``` ### chmod: Change file permissions ```bash chmod permissions file_name ``` ### chown: Change file owner and group ```bash chown owner:group file_name ``` ## Process Management ### ps: Display information about active processes ```bash ps ``` ### kill: Terminate processes ```bash kill process_id ``` ### top/htop: Display system processes in real-time ```bash top htop ``` ## System Information ### df: Display disk space usage ```bash df options ``` ### du: Estimate file space usage ```bash du options file_name ``` ### free: Display amount of free and used memory in the system ```bash free ``` ### uname: Print system information ```bash uname -a ``` ### uptime: Show how long the system has been running ```bash uptime ``` ## Network Management ### ping: Check the connectivity to a server or network device ```bash ping hostname_or_ip ``` ### ifconfig/ip: Display and configure network interfaces ```bash ip addr show ``` ### netstat: Print network connections, routing tables, interface statistics, etc. ```bash netstat options ``` ### wget/curl: Download files from the internet ```bash wget URL curl -O URL ``` ## System Administration ### sudo: Execute a command as the superuser (root) ```bash sudo command ``` ### shutdown/reboot: Shutdown or reboot the system ```bash shutdown options reboot ``` ### service/systemctl: Control system services (systemd-based systems) ```bash systemctl start|stop|restart service_name ``` ### journalctl: Query and display system logs ```bash journalctl options ``` ### passwd: Change user password ```bash passwd ``` ## Text Processing ### awk: A versatile programming language for pattern scanning and processing ```bash awk 'pattern { action }' file ``` ### sed: Stream editor for filtering and transforming text ```bash sed 's/search/replace/g' file ``` ### cut: Remove sections from each line of files ```bash cut options file ``` ### sort: Sort lines of text files ```bash sort options file ``` ### uniq: Report or omit repeated lines ```bash uniq options file ``` ### wc: Print newline, word, and byte counts for each file ```bash wc options file ``` ## Compression and Archiving ### tar: Archive files and directories ```bash tar options archive_name files ``` ### gzip/gunzip: Compress or decompress files ```bash gzip file gunzip file.gz ``` ### bzip2/bunzip2: Another compression utility ```bash bzip2 file bunzip2 file.bz2 ``` ## Miscellaneous ### echo: Display a line of text or variables ```bash echo "Hello, world!" ``` ### date: Display or set the system date and time ```bash date ``` ### watch: Execute a program periodically, showing output fullscreen ```bash watch command ``` ### alias: Create an alias for a command ```bash alias short_name='command sequence' ``` ### history: Display command history ```bash history ``` ### whoami: Display the current username ```bash whoami ``` ### touch: Change file timestamps or create empty files ```bash touch file_name ``` ### scp/rsync: Securely copy files between hosts ```bash scp file user@host:destination rsync options source destination ``` Mastering these Linux commands will empower you to efficiently manage files, processes, networks, and more directly from the terminal. Whether you're a system administrator, developer, or Linux enthusiast, these commands are indispensable tools for your daily workflow. Happy Linux command-line hacking!
rubiin