id: int64 (5 to 1.93M)
title: string (0 to 128 chars)
description: string (0 to 25.5k chars)
collection_id: int64 (0 to 28.1k)
published_timestamp: timestamp[s]
canonical_url: string (14 to 581 chars)
tag_list: string (0 to 120 chars)
body_markdown: string (0 to 716k chars)
user_username: string (2 to 30 chars)
1,909,657
Navigating the Maze: Exploring Cybersecurity Certifications (OSCP, CEH, CISSP)
The cybersecurity landscape demands a skilled workforce equipped with the right knowledge and...
0
2024-07-03T04:16:50
https://dev.to/epakconsultant/navigating-the-maze-exploring-cybersecurity-certifications-oscp-ceh-cissp-3gmn
The cybersecurity landscape demands a skilled workforce equipped with the right knowledge and certifications. This article delves into three prominent certifications – OSCP (Offensive Security Certified Professional), CEH (Certified Ethical Hacker), and CISSP (Certified Information Systems Security Professional) – helping you understand their distinctions and identify the path that aligns with your career goals.

## Understanding Certification Types: Offensive vs. Defensive Security

Cybersecurity can be broadly categorized into two areas: offensive security (ethical hacking) and defensive security. The chosen certification path depends on your career aspirations:

- Offensive Security Certifications: Focus on simulating attacker behavior, identifying vulnerabilities, and exploiting them in a controlled environment. These certifications are ideal for those pursuing careers in penetration testing, vulnerability research, or red teaming.
- Defensive Security Certifications: Emphasize broad knowledge of information security principles, best practices, and risk management strategies. These certifications cater to individuals seeking roles in security architecture, security operations, or security management.

## 1. OSCP: The Hands-On Hacking Badge

The OSCP certification validates your practical skills in penetration testing and ethical hacking. It's known for its demanding nature, requiring candidates to pass a 24-hour hands-on exam in a lab environment that simulates real-world hacking scenarios.

- Focus: Offensive Security
- Skills Assessed: Penetration testing methodologies, vulnerability exploitation, exploit development, post-exploitation techniques.
- Benefits: Highly regarded in the industry, demonstrates strong practical skills in ethical hacking.
- Considerations: Requires strong technical skills and experience with Linux and scripting languages.

[Unlock Your Cybersecurity Potential: The Essential Guide to Acing the CISSP Exam: Conquer the CISSP](https://www.amazon.com/dp/B0D42PRZD8)

## 2. CEH: Laying the Foundation in Ethical Hacking

The CEH certification provides a comprehensive foundation in ethical hacking methodologies and tools. It emphasizes theoretical knowledge through an objective-based exam and serves as a good starting point for aspiring ethical hackers.

- Focus: Offensive Security
- Skills Assessed: Ethical hacking principles, information security concepts, vulnerability analysis, system penetration testing methodologies.
- Benefits: Widely recognized certification, good entry point for those new to ethical hacking.
- Considerations: Knowledge-based exam, may not emphasize practical skills as heavily as OSCP.

## 3. CISSP: The Gold Standard in Cybersecurity

The CISSP certification signifies a broad and deep understanding of information security best practices and methodologies. It's considered the gold standard in the field, demonstrating a well-rounded security professional.

- Focus: Defensive Security
- Skills Assessed: Security and risk management, asset security, security architecture and engineering, communication and network security, identity and access management (IAM), security assessment and testing, security operations.
- Benefits: Highly respected and sought-after certification, opens doors to leadership and management roles in cybersecurity.
- Considerations: Requires experience in the cybersecurity field, broad knowledge base across various security domains.
## Choosing the Right Certification: Aligning with Your Goals

The ideal certification for you depends on your experience level and career aspirations:

- For Beginners: If you're new to cybersecurity, consider CEH as a foundational step before pursuing more advanced certifications like OSCP.
- For Penetration Testers: OSCP is a must-have certification to validate your practical skills and gain industry recognition.
- For Security Professionals: CISSP demonstrates a comprehensive understanding of information security and positions you for leadership roles.

## Beyond Certifications: A Holistic Approach

Remember, certifications are valuable credentials, but they are just one piece of the puzzle. Continuously update your knowledge, stay abreast of emerging threats, and gain practical experience to truly excel in the cybersecurity domain.

## Conclusion

The cybersecurity landscape offers a diverse range of career paths. By understanding the distinctions between OSCP, CEH, and CISSP, you can choose the certification that aligns with your interests and propels you forward in your cybersecurity journey. Remember, certifications are a valuable tool, but ongoing learning and practical experience are key to success in this dynamic field.
epakconsultant
1,909,654
Unveiling the Arsenal: Exploring Essential Cybersecurity Tools
In the ever-evolving landscape of cybersecurity, having the right tools at your disposal is...
0
2024-07-03T04:10:45
https://dev.to/epakconsultant/unveiling-the-arsenal-exploring-essential-cybersecurity-tools-3e0d
cybersecurity
In the ever-evolving landscape of cybersecurity, having the right tools at your disposal is paramount. This article delves into four powerful tools – Metasploit, Burp Suite, Nmap, and Wireshark – exploring their functionalities and highlighting their applications in ethical hacking and vulnerability assessments. Remember, these tools are for authorized penetration testing only, and using them for malicious purposes is strictly prohibited.

## 1. Metasploit: The Pen Tester's Toolkit

Imagine a comprehensive library of exploits, payloads, and encoders prepped for ethical hacking endeavors. That's the essence of Metasploit. It's an open-source framework that empowers penetration testers with the following capabilities:

- Exploit Database: Metasploit boasts a vast repository of exploits for various software vulnerabilities. These exploits can be leveraged to simulate real-world attacks and identify potential weaknesses in a system.
- Payload Delivery: Metasploit facilitates the creation and delivery of payloads – malicious code designed to achieve specific objectives on a compromised system.
- Auxiliary Modules: The framework offers a collection of auxiliary modules that can be used for tasks like reconnaissance, privilege escalation, and maintaining access to compromised systems.
- Automation Capabilities: Metasploit supports automation through its scripting languages, allowing testers to automate repetitive tasks within penetration testing engagements.

[Unlock Your Cybersecurity Potential: The Essential Guide to Acing the CISSP Exam: Conquer the CISSP](https://www.amazon.com/dp/B0D42PRZD8)

Applications of Metasploit:

- Vulnerability Assessment: Identify and exploit vulnerabilities within systems to assess their security posture.
- Security Awareness Training: Simulate attacks in a controlled environment to educate users about cybersecurity best practices.
- Developing Custom Exploits: Advanced users can leverage Metasploit's framework to develop custom exploits for emerging vulnerabilities.

## 2. Burp Suite: The Web Application Hacker's Companion

Web applications are prime targets for cyberattacks. Burp Suite emerges as a comprehensive platform for web application security testing:

- Proxy Interception: Intercept web traffic between the browser and the server, allowing testers to manipulate requests and responses to identify vulnerabilities.
- Intruder Module: Craft and automate attacks like fuzzing and parameter tampering to uncover potential injection flaws within web applications.
- Scanner Module: Utilize Burp Suite's built-in scanner to identify common web application vulnerabilities like SQL injection and XSS (Cross-Site Scripting).
- Extensibility: The platform offers extensibility through plugins, allowing users to integrate additional functionalities for specific testing needs.

Applications of Burp Suite:

- Web Application Penetration Testing: Identify and exploit vulnerabilities in web applications to assess their security strength.
- Security Code Review: Complement code reviews by testing applications for vulnerabilities that might be missed during manual code analysis.
- Web Application Security Awareness Training: Demonstrate web application attack vectors to developers and security teams.

## 3. Nmap: The Network Mapper

Before launching an attack, it's crucial to understand the target network infrastructure.
Nmap, a free and open-source network scanner, provides a wealth of information (a small Python sketch at the end of this article shows Nmap in action):

- Port Scanning: Identify open ports on a target system and determine the services running on those ports.
- Operating System Detection: Nmap can attempt to fingerprint the operating system running on a target device based on network responses.
- Service Version Detection: Identify the version of a service running on a specific port, which can be helpful in pinpointing known vulnerabilities associated with that version.
- Scripting Capabilities: Nmap offers scripting capabilities for advanced users to automate complex network scanning tasks.

Applications of Nmap:

- Network Reconnaissance: Identify devices, services, and potential vulnerabilities within a network during penetration testing engagements.
- Network Security Assessments: Evaluate the overall security posture of a network by identifying potential weaknesses.
- Network Inventory Management: Maintain an up-to-date inventory of network devices and services.

## 4. Wireshark: The Network Traffic Decoder

Understanding network traffic is vital for troubleshooting network issues and identifying suspicious activity. Wireshark, a free and open-source network protocol analyzer, empowers users with the following capabilities:

- Packet Capture: Capture network traffic flowing across a network interface, allowing for detailed analysis.
- Packet Decryption: With proper configurations, Wireshark can decrypt certain network protocols to expose the underlying data being transmitted.
- Protocol Analysis: Wireshark dissects network packets and displays the data at various protocol layers, offering a deep dive into network communication.
- Filtering and Search: Filter and search through captured traffic based on specific criteria to pinpoint anomalies or identify specific network activities.

Applications of Wireshark:

- Network Troubleshooting: Diagnose network issues by analyzing captured traffic and identifying potential causes of network performance problems.
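To make the scanning workflow concrete, here is a minimal sketch that drives Nmap from Python. This is an illustration under stated assumptions, not from the original article: the target address is a placeholder lab host, and the script assumes the `nmap` binary is installed. Only scan systems you are authorized to test.

```python
import subprocess

def scan(host: str) -> str:
    """Run a service/version scan of well-known ports and return Nmap's report."""
    # -sV probes service versions; -p 1-1024 restricts the scan to well-known ports.
    result = subprocess.run(
        ["nmap", "-sV", "-p", "1-1024", host],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(scan("10.0.0.5"))  # placeholder host on an authorized lab network
```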
epakconsultant
1,909,595
GraphQL Fundamentals
GraphQL provides a complete and understandable description of the API, including both "API endpoints"...
27,944
2024-07-03T04:07:28
https://dev.to/jacktt/graphql-fundamental-236k
graphql
GraphQL provides a complete and understandable description of the API, including both "API endpoints" and the response data schema. It empowers clients to request exactly what they need, ensuring that the server returns only the requested fields instead of the fixed response schema of a RESTful API. To do that, GraphQL declares a schema that includes types, queries, mutations, and more.

## Schema

A schema will look like this:

```graphql
type Author {
  id: ID
  name: String
  books: [Book!]
}

type Book {
  id: ID
  name: String
  author: [Author!]
}

type Query {
  getBooks: [Book!]
  getBook(id: ID): Book
}

type Mutation {
  addBook(name: String, author_id: String): Book
}
```

## Client request

GraphQL uses the POST method to communicate between client and server. A request body looks like this:

```json
{
    "query": "query whatever_name {\n  getBooks {\n    name\n  }\n  getBook(id: 1) {\n    name\n  }\n}",
    "variables": {}
}
```

The output will look like:

```json
{
    "data": {
        "getBooks": [
            {
                "name": "Hello world"
            }
        ],
        "getBook": {
            "name": "Hello 1"
        }
    }
}
```

### Query

Let's zoom in on the `query` field:

```graphql
query whatever_name {
  getBooks {
    name
  }
  getBook(id: 1) {
    name
  }
}
```

A request can call more than one query. To make it easier to imagine: with GraphQL, you can call multiple RESTful APIs in one request. As you can see, the response will consist of corresponding responses in a key-value format, with the key being the query name and the value being the response data.

## Variables

For reusability, a query can declare dynamic arguments and then inject the values by declaring variables in the request body. Let's rewrite the query above applying dynamic arguments:

```graphql
query whatever_name($id: ID) {
  getBooks {
    name
  }
  getBook(id: $id) {
    name
  }
}
```

Request body:

```json
{
    "query": "query whatever_name($id: ID) {\n  getBooks {\n    name\n  }\n  getBook(id: $id) {\n    name\n  }\n}",
    "variables": {
        "id": 1
    }
}
```

### Explore a GraphQL Endpoint

You can send the following introspection query to retrieve the types of any GraphQL endpoint. _(I extracted it from Postman)_

https://gist.github.com/huantt/3d8db80c878082e1237ab531a8e7ee26
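To see the request format above in action, here is a minimal Python sketch using the `requests` library. The endpoint URL is a placeholder assumption, not from the original post; substitute your own GraphQL server:

```python
import requests

QUERY = """
query whatever_name($id: ID) {
  getBooks { name }
  getBook(id: $id) { name }
}
"""

# Placeholder endpoint; point this at a real GraphQL server.
response = requests.post(
    "https://example.com/graphql",
    json={"query": QUERY, "variables": {"id": 1}},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # {"data": {"getBooks": [...], "getBook": {...}}}
```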
jacktt
1,909,652
Simplifying Tag Management: Unveiling the Benefits of Google Tag Manager
Juggling multiple marketing and analytics tags on your website can be a tedious task. Enter Google...
0
2024-07-03T04:05:52
https://dev.to/epakconsultant/simplifying-tag-management-unveiling-the-benefits-of-google-tag-manager-13d1
gtm
Juggling multiple marketing and analytics tags on your website can be a tedious task. Enter Google Tag Manager (GTM), a free tool that simplifies tag management, streamlines workflows, and empowers you to optimize your website's performance. This article delves into the core functionalities of GTM and explores the numerous advantages it offers for websites of all sizes.

[Mastering OWL 2 Web Ontology Language: From Foundations to Practical Applications](https://www.amazon.com/dp/B0CT93LVJV)

## Understanding Google Tag Manager: A Central Hub for Website Tags

Imagine a central control panel for all your website's marketing and analytics tags. That's precisely what Google Tag Manager offers. It acts as an intermediary between your website and various third-party tags, such as:

- Analytics Tags: Google Analytics, conversion tracking pixels, and other analytics tags can be integrated and managed through GTM.
- Marketing Tags: Implement tags for remarketing campaigns, social media integrations, and A/B testing tools, all within the GTM interface.
- Custom Tags: Craft custom HTML, JavaScript, or image tags to track specific user interactions or implement unique functionalities.

## Benefits of Using Google Tag Manager for Your Website

By adopting Google Tag Manager, you unlock a plethora of advantages for your website:

- Simplified Tag Management: No more editing website code directly every time you need to add or modify a tag. GTM provides a user-friendly interface for managing all your tags from a single location.
- Reduced Errors: Eliminate the risk of errors that might arise from manual code edits. GTM's user interface helps ensure proper tag implementation, reducing the potential for malfunctioning tags.
- Improved Efficiency: Save time and effort by managing tags centrally. GTM streamlines the process of adding, updating, and removing tags, freeing you to focus on other website optimization tasks.
- Enhanced Organization: GTM provides a clear overview of all your website's tags, their purposes, and their configurations. This organization simplifies maintenance and troubleshooting.
- Increased Agility: Need to deploy a new tag for a marketing campaign quickly? GTM allows for swift tag deployment without requiring code changes to your website.
- Collaboration: GTM offers team collaboration features, enabling multiple users to manage tags with defined access levels. This is particularly valuable for marketing and web development teams working together.
- Tag Version Control: GTM maintains a history of tag versions, allowing you to revert to a previous version if necessary. This ensures data integrity and provides a safety net if issues arise with a new tag implementation.
- Advanced Features: GTM offers powerful features like user management, container versions, and custom triggers. These functionalities cater to complex website tag management scenarios.

## Beyond the Basics: Leveraging GTM's Potential

While the core benefits focus on simplified tag management, GTM offers additional capabilities to enhance your website:

- Data Layer Integration: Implement a data layer to capture website user interactions and make this data readily available to all your tags. This simplifies data collection and utilization by various tracking tools.
- Trigger-Based Tag Firing: Control when tags fire on your website using triggers. You can define specific user actions or page events that trigger the activation of specific tags, ensuring relevant data collection.
- Container Previews and Debugging: GTM provides tools for previewing and debugging tag implementations before publishing changes to your live website. This helps identify and rectify any potential issues before they impact real users.

## Conclusion

Google Tag Manager is an invaluable tool for website owners and developers seeking a streamlined and efficient approach to tag management. By leveraging its capabilities, you can simplify workflows, minimize errors, and gain greater control over your website's analytics and marketing efforts. Regardless of your website's size or complexity, GTM offers a compelling solution to optimize your online presence and gain valuable insights into user behavior. Explore the world of Google Tag Manager and unlock the potential for a more efficient and data-driven website.
epakconsultant
1,909,650
Installing Canvas LMS: A Step-by-Step Guide for Different Systems
Canvas LMS, a popular learning management system, empowers educators and institutions to create...
0
2024-07-03T04:01:31
https://dev.to/epakconsultant/installing-canvas-lms-a-step-by-step-guide-for-different-systems-2l67
Canvas LMS, a popular learning management system, empowers educators and institutions to create engaging online learning experiences. But before diving into its functionalities, installing Canvas LMS on your system is the first crucial step. This guide explores the installation process for different operating systems, equipping you to set up Canvas and embark on your eLearning journey.

## Understanding Installation Options: Self-Hosting vs. Cloud-Based

Canvas LMS offers two primary deployment options:

- Self-Hosted Installation: You install and manage Canvas LMS on your own server infrastructure. This approach provides greater control and customization but requires technical expertise for setup and maintenance.
- Cloud-Based Deployment: Canvas LMS is hosted on a cloud platform managed by Instructure, the creators of Canvas. This option eliminates the need for server management but offers less control over the environment.

## Installing Canvas LMS on Windows

For a relatively straightforward installation experience, consider the Windows option (if self-hosting is your chosen path):

1. Download the Installer: Head to the Instructure website and download the Canvas LMS installer for Windows.
2. Run the Installer: Double-click the downloaded installer file and follow the on-screen instructions. The installer will guide you through steps like accepting the license agreement and choosing an installation directory.
3. Configure Database Settings: During installation, you'll be prompted to configure the database settings for Canvas. Ensure you have a compatible database management system (DBMS) like PostgreSQL set up before proceeding.
4. Complete the Installation: Once the configuration details are provided, the installer will handle the installation process.
5. Access Canvas LMS: After successful installation, open a web browser and navigate to the URL specified during the setup (typically http://localhost:3000). You can now access the Canvas LMS administration panel to configure your learning environment.

[The Lucrative Path to Becoming a Successful Notary Loan Signing Agent](https://www.amazon.com/dp/B0D8LLR31S)

## Installing Canvas LMS on Ubuntu

For a more technical installation, Ubuntu offers a command-line approach:

1. Prerequisites: Ensure you have Ubuntu installed with essential dependencies like Git, Ruby, Node.js, and Yarn.
2. Clone the Canvas LMS Repository: Use the git command to clone the official Canvas LMS repository from GitHub.
3. Install Dependencies: Run the necessary commands to install dependencies required for Canvas LMS to function.
4. Configure Database Settings: Similar to the Windows installation, configure the database settings for Canvas, specifying connection details for your chosen DBMS.
5. Install and Compile Assets: Run scripts to install additional dependencies and compile assets needed for Canvas LMS.
6. Install and Configure Apache: Set up an Apache web server on your Ubuntu system and configure it to serve Canvas LMS content.
7. Obtain SSL Certificate (Optional): For enhanced security, consider obtaining an SSL certificate for your Canvas LMS installation.
8. Configure Virtual Hosts: Configure Apache virtual hosts to point to the Canvas LMS directory within your server.
9. Setup Automated Jobs and Firewall Rules: Configure automated jobs for background tasks and establish firewall rules to secure your Canvas LMS installation.
10. Enable Canvas Rich Content Editor (Optional): Enable the Canvas Rich Content Editor for a more comprehensive content creation experience within your LMS.

## Cloud-Based Deployment

1. Sign Up for a Canvas LMS Account: Visit the Instructure website and create a Canvas LMS account.
2. Choose a Subscription Plan: Select a subscription plan that aligns with your needs and user base.
3. Configure Your LMS: Instructure provides a user-friendly interface for configuring your cloud-based Canvas LMS instance. This includes setting up user accounts, courses, and other learning environment aspects.

## Conclusion

This guide has provided a roadmap for installing Canvas LMS on Windows and Ubuntu, along with insights into the cloud-based deployment process. Remember, the level of technical expertise required varies depending on your chosen installation method. For a smooth installation journey, ensure you have the necessary technical skills or resources if opting for self-hosting. Cloud-based deployment offers a simpler approach but comes with limitations in terms of customization. Regardless of the chosen method, Canvas LMS empowers you to create a robust and engaging online learning environment.
epakconsultant
1,909,649
Optimization + Tuning Spark
Other Issues and How to Address Them We have also touched on another very common issue with Spark...
0
2024-07-03T04:00:20
https://dev.to/congnguyen/optimization-tuning-spark-1dme
**Other Issues and How to Address Them**

We have also touched on another very common issue with Spark jobs that can be harder to address: everything working fine but just taking a very long time. So what do you do when your Spark job is (too) slow?

**Insufficient resources**

Often, while there are some possible avenues of improvement, processing large data sets simply takes a lot longer than processing smaller ones, even without any big problem in the code or job tuning. Using more resources, either by increasing the number of executors or using more powerful machines, might just not be possible. When you have a slow job it's useful to understand:

- how much data you're actually processing (compressed file formats can be tricky to interpret),
- if you can decrease the amount of data to be processed by filtering or aggregating to lower cardinality,
- and if resource utilization is reasonable.

There are many cases where different stages of a Spark job differ greatly in their resource needs: loading data is typically I/O heavy, some stages might require a lot of memory, others might need a lot of CPU. Understanding these differences can help you optimize the overall performance. Use the Spark UI and logs to collect information on these metrics.

If you run into out-of-memory errors you might consider increasing the number of partitions. If the memory errors occur over time, look into why the size of certain objects keeps growing during the run and whether that growth can be contained. Also, look for ways of freeing up resources if garbage collection metrics are high. Certain algorithms (especially ML ones) use the driver to store data that the workers share and update during the run. If you see memory issues on the driver, check whether the algorithm you're using is pushing too much data there.

**Data skew**

If you drill down in the Spark UI to the task level, you can see whether certain partitions process significantly more data than others and whether they are lagging behind. Such symptoms usually indicate a skewed data set. Consider implementing the techniques mentioned in this lesson:

- add an intermediate data processing step with an alternative key
- adjust the spark.sql.shuffle.partitions parameter if necessary

The problem with data skew is that it's very specific to a data set. You might know ahead of time that certain customers or accounts are expected to generate a lot more activity, but the solution for dealing with the skew might strongly depend on what the data looks like. If you need to implement a more general solution (for example, for an automated pipeline) it's recommended to take a more conservative approach (assume that your data will be skewed) and then monitor how bad the skew really is.

**Inefficient queries**

Once your Spark application works, it's worth spending some time analyzing the query it runs. You can use the Spark UI to check the DAG and the jobs and stages it's built of. Spark's query optimizer is called Catalyst. While Catalyst is a powerful tool for turning Python code into an optimized query plan that can run on the JVM, it has some limitations when optimizing your code. It will, for example, push filters in a particular stage as early as possible in the plan, but it won't move a filter across stages. It's your job to make sure that if early filtering is possible without compromising the business logic, then you perform this filtering where it's most appropriate. It also can't decide for you how much data you're shuffling across the cluster.
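To make these tuning knobs concrete, here is a minimal PySpark sketch covering early filtering, shuffle-partition tuning, and a broadcast join (which the next paragraph discusses). The paths and column names are hypothetical placeholders, not from the original lesson:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Match shuffle width to the data volume (the default is 200 partitions).
spark.conf.set("spark.sql.shuffle.partitions", "64")

events = spark.read.parquet("/data/events")      # hypothetical path
accounts = spark.read.parquet("/data/accounts")  # hypothetical, small table

# Filter as early as possible: Catalyst won't move this across stages for you.
recent = events.filter(F.col("event_date") >= "2024-01-01")

# Broadcast the small side so the large side isn't shuffled across the network.
joined = recent.join(broadcast(accounts), "account_id")
joined.groupBy("account_id").count().show()
```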
Remember from the first lesson how expensive sending data through the network is. As much as possible, try to avoid shuffling unnecessary data. In practice, this means that you need to perform joins and grouped aggregations as late as possible. When it comes to joins, there is more than one strategy to choose from. If one of your data frames is small, consider using a broadcast hash join instead of a regular hash join.

**Further reading**

Debugging and tuning your Spark application can be a daunting task. There is an ever-growing community out there, though, always sharing new ideas and working on improving Spark itself and the tooling that makes using Spark easier. So if you have a complicated issue, don't hesitate to reach out to others (via user mailing lists, forums, and Q&A sites). You can find more information on tuning [Spark](https://spark.apache.org/docs/latest/tuning.html) and [Spark SQL](https://spark.apache.org/docs/latest/sql-performance-tuning.html) in the documentation.

Source: Udacity Data Engineering Nanodegree courses
congnguyen
1,909,254
JWT for Developers: Behind the Scenes.
90% of developers just use the jsonwebtoken library without really understanding what’s happening...
0
2024-07-03T03:58:03
https://dev.to/andres_fernandez_05a8738d/jwt-for-developers-behind-the-scenes-445p
node, javascript, api, security
90% of developers just use the jsonwebtoken library without really understanding what’s happening behind the scenes. Be part of the percentage that does. Grab a coffee and enjoy the learning journey ☕🤯.

**Technical definition of JWT:**

The Internet Engineering Task Force (IETF) is a globally recognized organization responsible for the creation and promotion of internet standards. [In its document RFC 7519](https://datatracker.ietf.org/doc/html/rfc7519), the IETF defines JWT (JSON Web Tokens) as:

"A compact, URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and/or encrypted."

----

Up to this point, we know the basics of JWTs ✅:

**A JWT consists of three parts:**

- Header.
- Payload.
- Signature.

**Library Trust:** We trust that the jsonwebtoken library works well to keep our app secure.

**Format:** the flow to get a JWT looks something like this:

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fztercceff10z1s3cgzf.png)

----

**Uses of JWTs 🦾**

JWTs signed with an algorithm and a secret key are known as JWS (JSON Web Signature). You have probably encountered them, and their use goes beyond merely assigning permissions for applications to access certain resources. Some of their most common uses include:

> - Authentication: Verifying the identity of a user.
> - Authorization: Granting or denying access to resources.
> - Federated Identity: Allowing multiple systems to share identity information.
> - Client-Side Sessions: Storing session information on the client side.
> - Client-Side Secrets: Keeping secrets on the client side without server storage.

Let's emphasize authorization. After the proper validation of a user's authentication, we need to assign a ticket for them to access certain protected resources of our applications.

----

## JWT breakdown

![Image](https://camo.githubusercontent.com/f87e0ba2f9cb5c35b5e97cd5dc2408f42f50cef4b568d3554ac8058494b23680/68747470733a2f2f6a77742e696f2f696d672f62616467652d636f6d70617469626c652e737667)

We are going to focus on creating a signed JWT (JSON Web Token), commonly known as a JWT with a Signature, to establish secure communication between two parties that can validate its authenticity. There are two types of signed JWTs: the first is signed with a shared secret using an HMAC based on SHA-256, and the second is signed with an RSA private key (the latter will be covered in another guide).

Let's start building our JWT (Signature). We need a few things first.

1. Function to convert JWT parts to Base64URL format ✔️.
2. Secret Key ✔️.
3. We use the native Node.js crypto module to sign it ✔️.

**Why use Base64-URL?**

This process ensures that the result is valid text in URLs: convert the binary data to Base64, replace `+` with `-`, replace `/` with `_`, and remove any padding characters (`=`).

```typescript
const crypto = require("crypto");

function base64urlEncode(data: string) {
  return Buffer.from(data)
    .toString("base64")
    .replace(/=/g, "")
    .replace(/\+/g, "-")
    .replace(/\//g, "_");
}
```

**Generate Secret Key 🔑**

Now we will create our secret key. It must be stored securely and never shared, since it is used to validate the signature. Avoid choosing short or obvious combinations.
It is recommended to generate secure random bytes, like this:

```typescript
const crypto = require('crypto');

// Generate a 256-bit (32-byte) secret key, hex-encoded
const secretKey = crypto.randomBytes(32).toString('hex');
```

---

**Signature 🔏.**

In this section, we explore the backbone of cybersecurity: cryptography! The standard recommends using the HMAC function with SHA-256, known as HS256 in the JWA spec. It also recommends RSA for tokens signed with private keys, a topic we will cover in another guide:

**HMAC (Hash-based Message Authentication Code).**

HMAC in Node.js uses a secret key and a hash function to create a unique code (MAC) that verifies the integrity and authenticity of a message. Here's a simplified explanation of what happens behind the scenes:

1. **Combine Key and Message**: The secret key and the message are combined in a specific way.
2. **Apply Hash Function**: This combination is then passed through a hash function (like SHA-256) twice.
3. **Generate Code**: The result is a fixed-size code that is unique to the given message and key.

This code ensures that the message cannot be altered by a cybercriminal without invalidating the signature. If even a single bit in the message changes, the HMAC code will be different, indicating unauthorized tampering. In this way, HMAC protects against malicious modification attempts.

Function to create the Signature 🛡️.

```typescript
const crypto = require('crypto');

function createSignature(
  encodedHeader: string,
  encodedPayload: string,
  secret: string
) {
  // Sign "<header>.<payload>" with HMAC-SHA256 and emit Base64URL
  const data = `${encodedHeader}.${encodedPayload}`;
  return crypto.createHmac("sha256", secret).update(data).digest("base64url");
}
```

Graphical example of signature creation 🎨 🖌️.

![box](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/islcw0w6vclkchkv0hqy.png)

---

`Completed example`

Here we have our completed example. As you can see, we can create our own token signing system. This is how libraries that implement the JWT standard work. I encourage you to experiment with Node.js's crypto module and implement your own solution.

```typescript
// base64urlEncode and crypto are defined in the earlier snippets
const header = {
  alg: "HS256",
  typ: "JWT",
};

const payload = {
  sub: "1234567890",
  name: "John Doe",
  iat: Math.floor(Date.now() / 1000),
};

const encodedHeader = base64urlEncode(JSON.stringify(header));
const encodedPayload = base64urlEncode(JSON.stringify(payload));

const secret = "your-256-bit-secret";

function createSignature(
  encodedHeader: string,
  encodedPayload: string,
  secret: string
) {
  const data = `${encodedHeader}.${encodedPayload}`;
  return crypto.createHmac("sha256", secret).update(data).digest("base64url");
}

const signature = createSignature(encodedHeader, encodedPayload, secret);
const jwt = `${encodedHeader}.${encodedPayload}.${signature}`;
```

### Summary

The token signature is the result of passing the header, payload, and a secret through a hash function, obtaining a fixed value. This value acts as a barrier against cybercriminals, ensuring that the data has not been modified and maintaining the integrity and authenticity of the information.

| JWT (Signature) | Value |
| --------------- |:-----:|
| Integrity       | ✅    |
| Authenticity    | ✅    |
| Privacy         | ❌    |

Remember 📌: do not include sensitive information in the payload of your JWTs, as they are only encoded, not encrypted. The JWT signature ensures the integrity and authenticity of the token but does not guarantee the privacy of the data.

---

Upcoming guides:

> Best practices for creating your JWTs.
> Validate JWTs like a pro.
> Know the main attacks and common failures when implementing JWTs.
andres_fernandez_05a8738d
1,909,647
Learning Conditional Statements in Python
Introduction Conditional statements are an essential part of programming that allows you...
0
2024-07-03T03:53:49
https://dev.to/davitacols/learning-conditional-statements-in-python-3224
webdev, python, softwaredevelopment
## Introduction

Conditional statements are an essential part of programming that allows you to execute certain pieces of code based on specific conditions. In Python, these are typically implemented using `if`, `elif`, and `else` statements. This guide will help you understand how to use these statements effectively.

## Basic `if` Statement

The `if` statement allows you to execute a block of code only if a specified condition is true.

### Syntax

```python
if condition:
    # code to execute if condition is true
```

### Example

```python
x = 10
if x > 5:
    print("x is greater than 5")
```

## `if-else` Statement

The `if-else` statement allows you to execute one block of code if the condition is true and another block of code if the condition is false.

### Syntax

```python
if condition:
    # code to execute if condition is true
else:
    # code to execute if condition is false
```

### Example

```python
x = 3
if x > 5:
    print("x is greater than 5")
else:
    print("x is not greater than 5")
```

## `if-elif-else` Statement

The `if-elif-else` statement is used to check multiple conditions. The first condition that evaluates to true will have its block of code executed, and the rest will be skipped.

### Syntax

```python
if condition1:
    # code to execute if condition1 is true
elif condition2:
    # code to execute if condition2 is true
else:
    # code to execute if none of the above conditions are true
```

### Example

```python
x = 7
if x > 10:
    print("x is greater than 10")
elif x > 5:
    print("x is greater than 5 but less than or equal to 10")
else:
    print("x is 5 or less")
```

## Nested `if` Statements

You can also nest `if` statements within each other to check multiple conditions.

### Syntax

```python
if condition1:
    if condition2:
        # code to execute if both condition1 and condition2 are true
```

### Example

```python
x = 8
y = 12
if x > 5:
    if y > 10:
        print("x is greater than 5 and y is greater than 10")
```

## Logical Operators

Logical operators such as `and`, `or`, and `not` can be used to combine multiple conditions in a single `if` statement.

### Example with `and`

```python
x = 7
y = 12
if x > 5 and y > 10:
    print("Both conditions are true")
```

### Example with `or`

```python
x = 4
y = 12
if x > 5 or y > 10:
    print("At least one condition is true")
```

### Example with `not`

```python
x = 4
if not x > 5:
    print("x is not greater than 5")
```

## Simple Program to Illustrate Conditional Statements

Here is a simple Python program to illustrate the use of conditional statements (`if`, `elif`, and `else`).

```python
# Get user input
age = int(input("Enter your age: "))

# Check the age and print the appropriate message
if age < 0:
    print("Invalid age")
elif age < 18:
    print("You are a minor.")
elif age <= 65:
    print("You are an adult.")
else:
    print("You are a senior.")
```

## Output

Running the program and entering `25` at the prompt prints `You are an adult.`

## Conclusion

Conditional statements are a fundamental part of controlling the flow of your Python programs. By using `if`, `elif`, and `else` statements effectively, along with logical operators, you can create more complex and useful programs. Practice writing your own conditional statements to become more comfortable with these concepts.
davitacols
1,909,646
ROC-AUC Curve in Machine Learning
In machine learning, evaluating the performance of your models is crucial. One powerful tool for this...
0
2024-07-03T03:51:52
https://dev.to/harsimranjit_singh_0133dc/roc-auc-curve-in-machine-learning-55ig
In machine learning, evaluating the performance of your models is crucial. One powerful tool for this purpose is the ROC-AUC curve. This article will explore what the ROC-AUC curve is, and how it works.

## Understanding the ROC Curve

The ROC curve visually represents the model's performance across all possible classification thresholds. It plots the **True Positive Rate (TPR)** on the y-axis and the **False Positive Rate (FPR)** on the x-axis.

- **TPR (True Positive Rate)**: Also known as recall or sensitivity, it measures the proportion of actual positive cases correctly classified as positive by the model: TPR = TP / (TP + FN).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2f5551cq9t47kjvj6zlp.png)

- **FPR (False Positive Rate)**: It measures the proportion of actual negative cases incorrectly classified as positive by the model: FPR = FP / (FP + TN).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otmj38ttvmlnoaqnwqz4.png)

## Plotting the ROC Curve

To plot a ROC curve, you vary the threshold for classifying positive and negative samples. At each threshold, you calculate the TPR and FPR, which gives you a point on the ROC curve. By connecting these points, you create the ROC curve.

## Interpreting the ROC Curve

The ideal ROC curve hugs the top-left corner of the plot, indicating a high TPR and a low FPR. The closer the ROC curve is to this corner, the better the model. Conversely, a ROC curve along the diagonal line from (0,0) to (1,1) indicates a model with no discrimination ability.

## Area Under the ROC Curve (AUC)

The AUC provides a single-number summary of the ROC curve. It represents the probability that a randomly chosen positive instance is ranked higher than a randomly chosen negative instance. An AUC of 0.5 indicates no discrimination, while an AUC of 1.0 represents perfect discrimination.

## Advantages of the ROC-AUC Curve

The ROC-AUC curve has several advantages. It is threshold-independent, meaning it evaluates model performance across all thresholds. This makes it robust, especially in scenarios with imbalanced datasets where metrics like accuracy can be misleading.

## Practical Considerations

ROC-AUC is particularly useful in fields like medical diagnostics and fraud detection, where the costs of false positives and false negatives differ. However, it's essential to consider the specific context of your problem when interpreting AUC values.

## ROC-AUC in Practice

Here's a brief guide on how to plot ROC curves and calculate AUC using Python's scikit-learn library:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split

# Toy data and model so the example runs end to end
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_true = train_test_split(X, y, random_state=42)
model = LogisticRegression().fit(X_train, y_train)
y_scores = model.predict_proba(X_test)[:, 1]

fpr, tpr, _ = roc_curve(y_true, y_scores)
roc_auc = auc(fpr, tpr)

plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
```

## Conclusion

The ROC-AUC curve is a vital tool for evaluating the performance of classification models. By understanding and correctly interpreting this metric, you can gain deeper insights into your model's strengths and weaknesses. Remember to use ROC-AUC alongside other metrics to get a comprehensive evaluation of your models.
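To tie the definitions back to the code, here is a short sketch that computes TPR and FPR by hand at a single threshold. The labels and scores are made-up toy values, used only to check the formulas above:

```python
import numpy as np

# Toy ground-truth labels and model scores (hypothetical values)
y_true = np.array([0, 0, 1, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9])

threshold = 0.5
y_pred = (y_scores >= threshold).astype(int)

tp = np.sum((y_pred == 1) & (y_true == 1))
fn = np.sum((y_pred == 0) & (y_true == 1))
fp = np.sum((y_pred == 1) & (y_true == 0))
tn = np.sum((y_pred == 0) & (y_true == 0))

tpr = tp / (tp + fn)  # recall / sensitivity
fpr = fp / (fp + tn)
print(f"TPR={tpr:.3f}, FPR={fpr:.3f}")  # TPR=0.667, FPR=0.000
```

Sweeping `threshold` from 1 down to 0 and plotting each (FPR, TPR) pair traces out exactly the curve that `roc_curve` returns.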
harsimranjit_singh_0133dc
1,909,645
When I fixed the tags, I lost my Google top ranking
How do I create SEO-optimized code? I only added the head tags, but the content is under 1,000 words, so Google Search indexes only a few words. ...
0
2024-07-03T03:49:42
https://dev.to/athip_duanthana_36ce9ed64/when-i-was-fix-the-tag-i-was-have-loss-google-toporanking-2888
How do I create SEO-optimized code? I only added the head tags, but the content is under 1,000 words, so Google Search indexes only a few words. How can I fix that? Visit my site via the keywords [รับทำ SEO](https://www.allconnective.com/) [บริษัททำSEO](https://www.allconnective.com/).
athip_duanthana_36ce9ed64
1,909,644
Exploring Leonardo AI: Features, Benefits, and Applications
Artificial intelligence (AI) is rapidly transforming our world, and Leonardo AI is at the forefront...
0
2024-07-03T03:48:34
https://dev.to/jettliya/exploring-leonardo-ai-features-benefits-and-applications-3ncg
Artificial intelligence (AI) is rapidly transforming our world, and Leonardo AI is at the forefront of this revolution. But what exactly is Leonardo AI, and how does it work? Let's dive in and explore the fascinating world of Leonardo AI.

**Understanding Leonardo AI**

**Origins and Development**

Leonardo AI is a state-of-the-art artificial intelligence system developed by leading experts in the field. It leverages the latest advancements in machine learning, natural language processing, and computer vision to provide innovative solutions across various industries.

**Key Features and Capabilities**

Leonardo AI is designed to be versatile, efficient, and user-friendly. Some of its key features include:

- Advanced machine learning algorithms that can analyze vast amounts of data quickly and accurately.
- Natural language processing capabilities that allow it to understand and generate human language.
- Computer vision technology that enables it to interpret and process visual information.
- Robust data analytics tools that provide valuable insights and predictions.

**Core Components of Leonardo AI**

**Machine Learning Algorithms**

At the heart of Leonardo AI are its powerful machine learning algorithms. These algorithms are designed to learn from data, identify patterns, and make decisions with minimal human intervention. This self-learning capability allows Leonardo AI to improve over time and adapt to new challenges.

**Natural Language Processing (NLP)**

NLP is a critical component of Leonardo AI, enabling it to understand, interpret, and generate human language. This technology is used in various applications, from chatbots and virtual assistants to sentiment analysis and language translation.

**Computer Vision**

Computer vision allows Leonardo AI to process and interpret visual information from the world around it. This includes recognizing objects, analyzing images, and even understanding video content. This capability is essential for applications such as autonomous vehicles, security systems, and medical imaging.

**Data Analytics**

Data analytics is another **[vital component of Leonardo AI](https://aichief.com/leonardo-ai/)**. It involves collecting, processing, and analyzing large datasets to uncover patterns, trends, and insights. This information is then used to make informed decisions and predictions, driving business success.

**How Leonardo AI Works**

**Data Collection and Processing**

The first step in the Leonardo AI process is data collection. This involves gathering relevant data from various sources, such as sensors, databases, and user interactions. Once the data is collected, it is processed and cleaned to ensure accuracy and consistency.

**Training the Model**

Next, the data is used to train the AI model. This involves feeding the data into the machine learning algorithms and allowing them to learn from it. The model is continually refined and adjusted to improve its accuracy and performance.

**Implementing Machine Learning**

Once the model is trained, it can be implemented in real-world applications. This involves integrating the AI system into existing workflows and processes, where it can analyze data, make predictions, and automate tasks.

**Real-World Applications**

Leonardo AI is used in a wide range of real-world applications. For example, it can be used in healthcare to analyze medical images, in finance to detect fraudulent transactions, and in retail to personalize customer experiences.
**Applications of Leonardo AI**

**Healthcare**

In the healthcare industry, Leonardo AI is used to analyze medical images, diagnose diseases, and recommend treatments. This technology can improve patient outcomes, reduce costs, and enhance the overall quality of care.

**Finance**

In finance, Leonardo AI is used to detect fraudulent transactions, analyze market trends, and provide investment recommendations. This helps financial institutions to manage risk, increase profitability, and provide better services to their customers.

**Retail**

In retail, Leonardo AI is used to personalize customer experiences, optimize inventory management, and analyze sales data. This technology can help retailers to increase sales, improve customer satisfaction, and streamline operations.

**Manufacturing**

In manufacturing, Leonardo AI is used to monitor production processes, predict equipment failures, and optimize supply chains. This can lead to increased efficiency, reduced downtime, and lower operational costs.

**Customer Service**

In customer service, Leonardo AI is used to power chatbots, analyze customer feedback, and provide personalized support. This technology can improve response times, increase customer satisfaction, and reduce the workload for human agents.

**Benefits of Using Leonardo AI**

**Efficiency and Productivity**

One of the main benefits of using Leonardo AI is its ability to increase efficiency and productivity. By automating routine tasks and providing valuable insights, Leonardo AI allows businesses to operate more effectively and achieve better results.

**Accuracy and Reliability**

Leonardo AI is designed to be highly accurate and reliable. Its advanced algorithms can analyze data with a high degree of precision, reducing the risk of errors and improving decision-making.

**Scalability**

Leonardo AI is scalable, meaning it can be easily adapted to meet the needs of different businesses and industries. Whether you are a small startup or a large corporation, Leonardo AI can be customized to suit your specific requirements.

**Cost-Effectiveness**

By automating tasks and providing valuable insights, Leonardo AI can help businesses to reduce costs and increase profitability. This makes it a cost-effective solution for companies looking to improve their operations and stay competitive.

**Challenges and Limitations**

**Data Privacy Concerns**

One of the main challenges of using AI technology is data privacy. Collecting and processing large amounts of data can raise concerns about how that data is used and protected. It is essential for businesses to implement robust data privacy policies to address these concerns.

**Ethical Considerations**

Ethical considerations are another important factor when using AI technology. This includes ensuring that AI systems are used responsibly and do not cause harm to individuals or society. Businesses must consider the ethical implications of their AI applications and take steps to mitigate any potential risks.

**Technical Limitations**

While Leonardo AI is a powerful tool, it is not without its technical limitations. For example, the accuracy of the AI model can be affected by the quality of the data it is trained on. Additionally, implementing AI technology can be complex and require significant resources.

**Future of Leonardo AI**

**Emerging Trends in AI**

The field of AI is constantly evolving, with new trends and technologies emerging all the time.
Some of the most exciting trends in AI include advancements in deep learning, reinforcement learning, and AI ethics. These trends are likely to shape the future of Leonardo AI and its applications.

**Potential Advancements**

Looking ahead, there are many potential advancements for Leonardo AI. For example, improvements in AI algorithms and hardware could lead to even greater accuracy and efficiency. Additionally, new applications and use cases for Leonardo AI are likely to emerge as the technology continues to evolve.

**Conclusion**

Leonardo AI is a cutting-edge technology that is transforming industries and driving innovation. With its advanced machine learning algorithms, natural language processing capabilities, and robust data analytics tools, Leonardo AI is helping businesses to operate more efficiently, make better decisions, and stay competitive. While there are challenges and limitations to consider, the future of Leonardo AI looks bright, with many exciting advancements on the horizon.

**FAQs**

**1. What industries can benefit from Leonardo AI?**

Leonardo AI can benefit a wide range of industries, including healthcare, finance, retail, manufacturing, and customer service. Its versatile capabilities make it suitable for various applications and use cases.

**2. How secure is Leonardo AI?**

Leonardo AI is designed with security in mind. It uses advanced encryption and data protection measures to ensure that sensitive information is kept safe and secure. However, businesses must also implement their own security policies to protect their data.

**3. Can Leonardo AI be customized?**

Yes, Leonardo AI can be customized to meet the specific needs of different businesses and industries. This includes tailoring the AI model, algorithms, and applications to suit individual requirements.

**4. What are the costs associated with Leonardo AI?**

The costs associated with Leonardo AI can vary depending on the specific implementation and use case. However, it is generally considered to be a cost-effective solution that can provide significant return on investment.

**5. How does Leonardo AI compare to other AI systems?**

Leonardo AI is considered to be one of the leading AI systems on the market. Its advanced features, versatility, and ease of use make it a popular choice for businesses looking to implement AI technology.
jettliya
1,909,643
Dr. Swati's Dental Clinic: Your Best Choice for Dental Care in Jaipur
Finding the perfect dental clinic can be a daunting task, but if you're looking for the best...
0
2024-07-03T03:44:44
https://dev.to/drswati12/dr-swatis-dental-clinic-your-best-choice-for-dental-care-in-jaipur-1pbg
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s3ahvls9fj4djq16lcfp.png)

Finding the perfect [dental clinic](https://drswaticlinic.com/) can be a daunting task, but if you're looking for the best dentist in Jaipur, [Dr. Swati's Dental Clinic](https://www.google.com/maps/place/Dr.+Swati's+Dental+Clinic+%7C+Best+Dental+Clinic+C-Scheme+Jaipur/@26.9096866,75.8077482,15z/data=!4m6!3m5!1s0x396db78b358bb759:0x7d4e0a6205839c85!8m2!3d26.9096866!4d75.8077482!16s%2Fg%2F11qzztbdyx?entry=ttu) is your go-to destination. Located in the heart of the city, this clinic offers top-notch dental care with a personal touch. In this blog, we'll explore why Dr. Swati's Dental Clinic stands out and how it caters to all your dental needs.

Why Choose Dr. Swati's Dental Clinic?

1. Highly Qualified and Experienced Dentist: Dr. Swati is a renowned dentist in Jaipur with years of experience and numerous satisfied patients. Her expertise in various dental procedures ensures that you receive the best care possible.
2. Comprehensive Range of Services: The clinic offers a wide array of dental services including preventive, restorative, and cosmetic dentistry. Services include teeth whitening, dental implants, root canal treatment, orthodontics, and pediatric dentistry.
3. State-of-the-Art Technology: Equipped with the latest dental technology, Dr. Swati's Dental Clinic ensures precise diagnostics and effective treatments. Facilities include digital X-rays, intraoral cameras, and modern sterilization techniques.
4. Convenient Location: Situated in the prime locality of C Scheme, the clinic is easily accessible from all parts of Jaipur. The central location makes it convenient for regular visits and emergency consultations.
5. Patient-Centric Approach: Dr. Swati and her team prioritize patient comfort and satisfaction. The clinic's friendly environment and personalized care ensure a pleasant dental experience.

Services Offered at Dr. Swati's Dental Clinic

1. Preventive Dentistry: Regular check-ups and cleanings; fluoride treatments and sealants
2. Restorative Dentistry: Fillings, crowns, and bridges; dental implants and dentures
3. Cosmetic Dentistry: Teeth whitening and veneers; smile makeovers and cosmetic bonding
4. Orthodontics: Braces and Invisalign; corrective treatments for misaligned teeth
5. Pediatric Dentistry: Specialized care for children's dental needs; preventive and restorative treatments for kids

Why Regular Dental Visits are Important

Maintaining good oral health goes beyond brushing and flossing. Regular dental visits are crucial for early detection and treatment of dental issues. Dr. Swati's Dental Clinic emphasizes the importance of preventive care to avoid serious dental problems in the future.

Tips for Maintaining Oral Health

- Brush your teeth twice a day with fluoride toothpaste.
- Floss daily to remove plaque between teeth.
- Limit sugary snacks and beverages to prevent cavities.
- Visit Dr. Swati's Dental Clinic regularly for check-ups and cleanings.

Testimonials from Satisfied Patients

Rohit Sharma: "Dr. Swati's Dental Clinic is the best dental clinic in Jaipur. The staff is very friendly, and Dr. Swati is extremely professional. Highly recommend!"

Priya Singh: "I had a great experience at Dr. Swati's Dental Clinic. The clinic is well-equipped, and Dr. Swati is one of the top dentists in Jaipur. My smile has never looked better!"

Contact Dr. Swati's Dental Clinic
drswati12
1,909,642
Migrating data in production (with zero downtime)
When running Reform, we often faced the issue of having to migrate data in our database. We were an...
0
2024-07-03T03:38:24
https://dev.to/bjorndcode/migrating-data-in-production-with-zero-downtime-29lm
laravel, database, webdev
When running [Reform](https://www.reform.app/), we often faced the issue of having to migrate data in our database. We were an early-stage startup, so we built features as needed. Down the road we ran into scenarios where we realised we hadn't chosen the correct data model. The nature of the product (a form builder) meant we couldn't have any downtime, because downtime meant our customers lost leads.

## Example

What do I mean by migrating data? I mean any change to the data model, like moving data to a new column or changing a column type.

When we launched Reform, we only supported single-page forms. We stored the page as a JSON object in a `page` column. But we quickly realised we needed to support multi-page forms. That meant we had to replace `page` with `pages` and store an array of pages instead.

## 3-step approach

We developed a 3-step approach to make these changes with no downtime.

1. Preparation: Prepare the database to handle the new data model and migrate existing data
2. Launch: Release the new feature
3. Clean-up: Remove old data and tweak new columns

The main idea is to create a new column before reading data from it. We can migrate existing data to the new column, and since we're not reading data from this column yet, it doesn't matter how long the migration takes. Once all data is migrated, we can update the app logic to read from the new column. In practice this requires releasing a preparation PR, migrating the data and then releasing the actual feature (and clean-up).

## Preparation

The first release is a preparation PR. This PR contains a migration to add the new column. Because the new column is created for existing rows, it will have to be nullable for now.

Next we'll update the app logic to save any changes both in the old column and the new column. In our example we would still save the page as an object in `page`, but we'd also generate a JSON array on the fly and save the page as the only item in the array. This means that if a user saves their data, it will sync to both `page` and `pages`.

```
// Dummy code
$form->update([
    'page' => $page,
    'pages' => [$page],
]);
```

But we still haven't dealt with rows the user hasn't updated. For that, we'll have to create a custom migration script. Reform is built with Laravel, so we'd create the script using a custom command and a custom queue job.

The command fetches unmigrated data from the database. In our example, that means any rows where `pages` is null. Only fetching unmigrated data also means we can re-run the command as many times as needed. We'd often have to re-run it if an error occurred, or when we batched it to only run for N rows.

```
// Dummy code
$forms = Form::whereNull('pages')->get();

$forms->each(function ($form) {
    MigrateFormPageToPages::dispatch($form);
});
```

For each row, it dispatches a queue job responsible for migrating a single row. Having a separate queue job for each row has a couple of benefits:

1. If a job fails, we can inspect and re-run it
2. Processing a lot of queue jobs automatically adds a small delay, so we don't overload the database with too many requests

We triggered the command through the server console on DigitalOcean.

## Launch

At this point we have a new column in our database. We've migrated all existing data so it exists in both columns, and updates are synced to both columns. It'd now be safe to release the new feature that prompted this migration. In our Reform example that meant launching a new UI that could handle multiple pages.
On the backend we also updated the code to only update the `pages` column, since `page` is now obsolete.

## Cleanup

Finally, we can clean up the mess we created. We can safely drop the `page` column since it's not needed any more. Initially we made `pages` nullable, but at this point we can add a not-null constraint since all rows have data. (A sketch of what this migration could look like follows at the end of this post.)

I'm currently building a new database client for developers. It's the tool I wish we had when building Reform. Check out a quick demo here: https://www.youtube.com/watch?v=KAyeOBe7csc
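For reference, the cleanup migration mentioned above might look roughly like this in Laravel — a minimal sketch, assuming a hypothetical `forms` table; note that altering a column with `change()` requires the `doctrine/dbal` package on older Laravel versions:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('forms', function (Blueprint $table) {
            // Every row has been migrated, so the new column can now be required
            $table->json('pages')->nullable(false)->change();

            // Nothing reads the old column any more, so it can go
            $table->dropColumn('page');
        });
    }
};
```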
bjorndcode
1,909,640
Implementing Type-Safe Next.js Server Actions with Zsa: A Comprehensive Guide
In the realm of web development, ensuring type safety and validation is crucial for building robust...
0
2024-07-03T03:27:38
https://dev.to/vyan/implementing-type-safe-nextjs-server-actions-with-zsa-a-comprehensive-guide-25bn
webdev, nextjs, react, javascript
In the realm of web development, ensuring type safety and validation is crucial for building robust and reliable applications. Next.js, a popular React framework, offers powerful server-side capabilities, but achieving end-to-end type safety can be challenging. Enter the Zsa package, a powerful tool that combines Zod and server actions to ensure type-safe Next.js server actions. In this blog, we'll explore how to leverage the Zsa package to implement type-safe server actions in your Next.js applications.

## What is the Zsa Package?

The Zsa package is a library that integrates Zod, a TypeScript-first schema declaration and validation library, with server actions in Next.js. This combination provides a clean and efficient approach to type safety and input validation, enhancing both development efficiency and user experience.

### Key Features of Zsa:

- **Type Safety:** Ensures that all data passed to server actions is type-checked and validated.
- **Input Validation:** Utilizes Zod to define and enforce input schemas.
- **Efficient Server Actions:** Simplifies the creation and execution of server actions in Next.js.

## Getting Started with Zsa

Let's dive into the implementation of type-safe server actions using the Zsa package.

### 1. Setting Up Your Project

First, ensure you have a Next.js project set up. If not, you can create one using the following command:

```bash
npx create-next-app@latest my-next-app
cd my-next-app
```

Next, install the necessary dependencies:

```bash
npm install zsa zod react-hook-form
```

### 2. Defining Input Schemas with Zod

Zod allows you to define input schemas that ensure your data is validated before reaching your server actions. Here's an example of how to define an input schema:

```typescript
import { z } from 'zod';

const createCollectionSchema = z.object({
  name: z.string().min(1, "Name is required"),
  description: z.string().optional(),
});
```

### 3. Creating Server Actions with Zsa

Using Zsa, you can create server actions that are both type-safe and validated. Here's how to implement a server action for creating a collection:

```typescript
import { createServerAction } from 'zsa';
import { createCollectionSchema } from './schemas';

const createCollection = createServerAction(createCollectionSchema, async (input, context) => {
  // Your server logic here
  const { name, description } = input;
  // Assume we have a function to save the collection to the database
  await saveCollectionToDatabase(name, description, context.user.id);
  return { success: true };
});

export default createCollection;
```

### 4. Utilizing Server Actions in Client Components

You can use server actions in your React components by leveraging React Hook Form for input validation and handling:

```jsx
import React from 'react';
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import createCollection from '../serverActions/createCollection';
import { createCollectionSchema } from '../serverActions/schemas';

const CreateCollectionForm = () => {
  const { register, handleSubmit, formState: { errors } } = useForm({
    resolver: zodResolver(createCollectionSchema),
  });

  const onSubmit = async (data) => {
    const result = await createCollection(data);
    if (result.success) {
      alert('Collection created successfully!');
    } else {
      alert('Error creating collection');
    }
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <div>
        <label>Name</label>
        <input {...register('name')} />
        {errors.name && <p>{errors.name.message}</p>}
      </div>
      <div>
        <label>Description</label>
        <input {...register('description')} />
      </div>
      <button type="submit">Create Collection</button>
    </form>
  );
};

export default CreateCollectionForm;
```

### 5. Enhancing Development Efficiency with Zsa Hooks

The Zsa package provides hooks like `useServerAction` to simplify the execution of server actions and handling outcomes. This enhances both development efficiency and user experience:

```jsx
import { useServerAction } from 'zsa';
import createCollection from '../serverActions/createCollection';

const CreateCollectionForm = () => {
  const { execute, result, error } = useServerAction(createCollection);

  const onSubmit = async (data) => {
    await execute(data);
  };

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      {/* Form fields and handleSubmit wiring omitted for brevity */}
      {result && <p>Collection created successfully!</p>}
      {error && <p>Error creating collection: {error.message}</p>}
    </form>
  );
};
```

### 6. Centralizing Validations and User Authentication

A clean code structure often requires centralizing validations and ensuring user authentication. The Zsa package helps streamline these processes:

```typescript
import { createServerAction, createProcedure } from 'zsa';
import { authenticatedProcedure } from './procedures';
import { createCollectionSchema } from './schemas';

const createCollection = createProcedure(
  authenticatedProcedure,
  createServerAction(createCollectionSchema, async (input, context) => {
    const { name, description } = input;
    await saveCollectionToDatabase(name, description, context.user.id);
    return { success: true };
  })
);

export default createCollection;
```

### Conclusion: Explore the Zsa Package

The Zsa package provides a robust solution for implementing type-safe Next.js server actions, combining the power of Zod for input validation with efficient server action handling. By leveraging these tools, you can enhance your development workflow, improve code maintainability, and ensure a seamless user experience. Whether you're building a simple form or a complex application, the Zsa package — together with Next.js features like server-side rendering and static site generation, and integrations such as React Query — is a valuable ally in your web development toolkit. Explore the Zsa package and unlock the potential of type-safe Next.js server actions today!
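One Zod detail worth knowing alongside all of this (plain Zod, independent of Zsa's own API): you can derive the TypeScript input type straight from a schema instead of maintaining a duplicate interface. A minimal sketch:

```typescript
import { z } from 'zod';

const createCollectionSchema = z.object({
  name: z.string().min(1, 'Name is required'),
  description: z.string().optional(),
});

// Derive the input type straight from the schema — no duplicate interface to maintain
type CreateCollectionInput = z.infer<typeof createCollectionSchema>;

const input: CreateCollectionInput = { name: 'My collection' }; // description is optional
```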
vyan
1,909,639
HTTPS: How HTTPS Works - Handshake
What is HTTPS? HTTPS (HyperText Transfer Protocol Secure) is an extension of HTTP and uses...
0
2024-07-03T03:24:26
https://dev.to/zeeshanali0704/https-how-https-works-handshake-1mjo
systemdesignwithzeeshanali, systemdesign
# What is HTTPS?

HTTPS (HyperText Transfer Protocol Secure) is an extension of HTTP and uses TLS (Transport Layer Security) to encrypt data between the client and server. The HTTPS handshake is a critical part of this process, ensuring that the communication is secure.

### Working of the HTTPS Handshake

#### Protocol Used:

- **TLS (Transport Layer Security):** This protocol ensures data privacy and integrity between the client and server.

#### Steps of the HTTPS Handshake:

1. **TCP Handshake:**
   - **TCP SYN:** The client sends a TCP SYN packet to the server to initiate a TCP connection.
   - **TCP SYN + ACK:** The server responds with a TCP SYN-ACK packet, acknowledging the connection request.
   - **TCP ACK:** The client sends a TCP ACK packet to acknowledge the server's response, completing the TCP handshake.
2. **Client Hello:**
   - The client sends a "Client Hello" message to the server. This message includes:
     - Supported TLS versions.
     - Cipher suites (encryption algorithms).
     - A randomly generated number.
3. **Server Hello:**
   - The server responds with a "Server Hello" message, which includes:
     - Selected TLS version.
     - Selected cipher suite.
     - A randomly generated number.
4. **Server Certificate:**
   - The server sends its certificate to the client. This certificate contains the server's public key and is signed by a trusted CA.
5. **Server Key Exchange (if necessary):**
   - Depending on the cipher suite, the server may send additional key exchange information.
6. **Server Hello Done:**
   - The server signals that it has finished its initial handshake messages.
7. **Client Key Exchange:**
   - The client generates a pre-master secret key and encrypts it using the server's public key, then sends it to the server.
8. **Generate Session Keys:**
   - Both the client and the server use the pre-master secret and the random numbers to generate the session keys, which are symmetric keys used for encrypting the data during the session.
9. **Change Cipher Spec (Client):**
   - The client sends a "Change Cipher Spec" message to indicate that subsequent messages will be encrypted with the session key.
10. **Client Finished:**
    - The client sends a "Finished" message, encrypted with the session key, to indicate that the client part of the handshake is complete.
11. **Change Cipher Spec (Server):**
    - The server sends a "Change Cipher Spec" message to indicate that subsequent messages will be encrypted with the session key.
12. **Server Finished:**
    - The server sends a "Finished" message, encrypted with the session key, to indicate that the server part of the handshake is complete.

Once the handshake is complete, the client and server use the session keys to encrypt and decrypt the data they exchange.

#### Advantages of Using HTTPS:

1. **Encryption:** Data exchanged between the client and server is encrypted, protecting it from eavesdroppers and attackers.
2. **Data Integrity:** Ensures that data cannot be modified or corrupted during transfer without being detected.
3. **Authentication:** Verifies that the client is communicating with the legitimate server, preventing man-in-the-middle attacks.
4. **Trust:** Increases user trust in the website, as modern browsers display a padlock icon for HTTPS connections, indicating that the connection is secure.
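The step list above describes the classic TLS 1.2-style exchange; TLS 1.3 condenses several of these round trips. To observe the negotiated outcome of a real handshake, here's a minimal Python sketch using only the standard library — `example.com` is a placeholder host:

```python
import socket
import ssl

hostname = "example.com"  # placeholder host
context = ssl.create_default_context()  # validates the server certificate against trusted CAs

with socket.create_connection((hostname, 443)) as tcp_sock:  # TCP handshake happens here
    with context.wrap_socket(tcp_sock, server_hostname=hostname) as tls_sock:  # TLS handshake
        print("TLS version:", tls_sock.version())    # e.g. 'TLSv1.3'
        print("Cipher suite:", tls_sock.cipher())    # the negotiated cipher suite
        print("Peer certificate subject:", tls_sock.getpeercert()["subject"])
```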
### Diagram:

Below is a simplified diagram illustrating the HTTPS handshake process:

```
Client                                      Server
  |                                            |
  |-------- TCP SYN -------------------------->|
  |<------- TCP SYN + ACK ---------------------|
  |-------- TCP ACK -------------------------->|
  |                                            |
  |-------- Client Hello --------------------->|
  |<------- Server Hello ----------------------|
  |<------- Server Certificate (KEY) ----------|
  |<------- Server Key Exchange (if) ----------|
  |<------- Server Hello Done -----------------|
  |                                            |
  |-------- Client Key Exchange -------------->|
  |-------- Change Cipher Spec --------------->|
  |-------- Client Finished ------------------>|
  |                                            |
  |<------- Change Cipher Spec ----------------|
  |<------- Server Finished -------------------|
  |                                            |
  |-------- Encrypted Data ------------------->|
  |<------- Encrypted Data --------------------|
```

#### Detailed Steps:

1. **TCP Handshake:**
   - Client initiates the connection with TCP SYN.
   - Server responds with TCP SYN-ACK.
   - Client acknowledges with TCP ACK.
2. **Client Hello:**
   - Client sends supported TLS versions, cipher suites, and a random number.
3. **Server Hello:**
   - Server responds with the selected TLS version, cipher suite, and a random number.
4. **Server Certificate:**
   - Server sends its digital certificate for authentication.
5. **Server Key Exchange (if necessary):**
   - Server may send additional key exchange information.
6. **Server Hello Done:**
   - Server indicates the end of its initial messages.
7. **Client Key Exchange:**
   - Client sends the pre-master secret encrypted with the server's public key.
8. **Change Cipher Spec (Client):**
   - Client signals the start of encrypted communication.
9. **Client Finished:**
   - Encrypted message indicating the client is ready.
10. **Change Cipher Spec (Server):**
    - Server signals the start of encrypted communication.
11. **Server Finished:**
    - Encrypted message indicating the server is ready.
12. **Encrypted Data:**
    - Secure data exchange begins with encrypted data packets.

This detailed process ensures a secure, encrypted connection between the client and server, protecting data integrity and privacy.

More details: get all articles related to system design under the hashtag SystemDesignWithZeeshanAli.

Git: https://github.com/ZeeshanAli-0704/SystemDesignWithZeeshanAli
zeeshanali0704
1,909,638
Working With Time Deltas in Python
This lab guides you through the process of working with time deltas in Python using the pandas library. A time delta represents a duration or difference in time. We will explore different ways to construct, manipulate, and operate on time deltas.
27,675
2024-07-03T03:23:13
https://labex.io/tutorials/python-working-with-time-deltas-65456
pandas, coding, programming, tutorial
## Introduction

This lab guides you through the process of working with time deltas in Python using the pandas library. A time delta represents a duration or difference in time. We will explore different ways to construct, manipulate, and operate on time deltas.

### VM Tips

After the VM startup is done, click the top left corner to switch to the **Notebook** tab to access Jupyter Notebook for practice. Sometimes, you may need to wait a few seconds for Jupyter Notebook to finish loading. The validation of operations cannot be automated because of limitations in Jupyter Notebook. If you face issues during learning, feel free to ask Labby. Provide feedback after the session, and we will promptly resolve the problem for you.

## Import the Required Libraries

First, we need to import the necessary libraries. In this case, we will be using pandas and numpy.

```python
# Import the required libraries
import pandas as pd
import numpy as np
import datetime
```

## Construct a Timedelta

Let's create a timedelta object, which represents a duration or difference in time.

```python
# Construct a timedelta object
pd.Timedelta("1 days 2 hours")
```

## Convert to Timedelta

You can convert a scalar, array, list, or series from a recognized timedelta format into a timedelta type.

```python
# Convert a string to a timedelta
pd.to_timedelta("1 days 06:05:01.00003")
```

## Perform Operations

You can perform mathematical operations on timedeltas.

```python
# Subtract two timedeltas
s = pd.Series(pd.date_range("2012-1-1", periods=3, freq="D"))
s - s.max()
```

## Access Attributes

You can access various components of the timedelta directly.

```python
# Access the days attribute of a timedelta
tds = pd.Timedelta("31 days 5 min 3 sec")
tds.days
```

## Convert to ISO 8601 Duration

You can convert a timedelta to an ISO 8601 Duration string.

```python
# Convert a timedelta to an ISO 8601 Duration string
pd.Timedelta(days=6, minutes=50, seconds=3, milliseconds=10, microseconds=10, nanoseconds=12).isoformat()
```

## Create a Timedelta Index

You can generate an index with time deltas.

```python
# Generate a timedelta index
pd.TimedeltaIndex(["1 days", "1 days, 00:00:05", np.timedelta64(2, "D"), datetime.timedelta(days=2, seconds=2)])
```

## Use the Timedelta Index

You can use the timedelta index as the index of pandas objects.

```python
# Use the timedelta index as the index of a pandas series
s = pd.Series(np.arange(100), index=pd.timedelta_range("1 days", periods=100, freq="h"))
```

## Perform Operations with Timedelta Index

You can perform operations with the timedelta index.

```python
# Add a timedelta index to a datetime index
tdi = pd.TimedeltaIndex(["1 days", pd.NaT, "2 days"])
dti = pd.date_range("20130101", periods=3)
(dti + tdi).to_list()
```

## Resample a Timedelta Index

You can resample data with a timedelta index.

```python
# Resample data with a timedelta index
s.resample("D").mean()
```

## Summary

In this lab, we learned how to work with time deltas in Python using the pandas library. We covered how to construct a timedelta, convert to timedelta, perform operations, access attributes, convert to ISO 8601 Duration, create a timedelta index, use the timedelta index, perform operations with a timedelta index, and resample a timedelta index. With these skills, you can efficiently handle and manipulate time-based data in your future data analysis tasks.

---

## Want to learn more?
- 🚀 Practice [Working With Time Deltas](https://labex.io/tutorials/python-working-with-time-deltas-65456)
- 🌳 Learn the latest [Pandas Skill Trees](https://labex.io/skilltrees/pandas)
- 📖 Read More [Pandas Tutorials](https://labex.io/tutorials/category/pandas)

Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx)! 😄
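As a final recap tying several of the steps above together — construction, arithmetic, attributes, and ISO 8601 output — here's a compact, self-contained sketch with arbitrary example timestamps:

```python
import pandas as pd

# Gaps between event timestamps come back as a timedelta series
events = pd.Series(pd.to_datetime([
    "2024-01-01 09:00", "2024-01-01 09:45", "2024-01-01 11:30",
]))
gaps = events.diff()            # NaT, 45 minutes, 1 hour 45 minutes

print(gaps.dt.total_seconds())  # numeric view of each gap
print(gaps.sum())               # total elapsed time (NaT is skipped)
print(gaps.max().isoformat())   # ISO 8601 duration, e.g. 'P0DT1H45M0S'
```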
labby
1,909,464
Azure Blob Storage.
Azure Blob Storage is a service for storing large amounts of unstructured object data in the...
0
2024-07-03T03:16:39
https://dev.to/tojumercy1/azure-blob-storage-1gm8
azure, stepfunctions, github, softwaredevelopment
Azure Blob Storage is a service for storing large amounts of unstructured object data in the cloud — data that doesn't adhere to a particular data model or definition, such as text or binary data. Blob stands for Binary Large Object. Blob storage is also referred to as object storage or container storage.

Task 1: Create a storage account.

- Create a storage account in your region with locally redundant storage.
- Verify the storage account was created.

Task 2: Work with blob storage.

- Create a private blob container.
- Upload a file to the container.

Task 3: Monitor the storage container.

- Review common storage problems and troubleshooting guides.
- Review insights for performance, availability, and capacity.

**Azure Blob Storage** uses a container resource to group a set of blobs. A blob can't exist by itself in Blob Storage; a blob must be stored in a container resource.

## **Task 1: Create a storage account.**

- We will first create a new **Storage Account**.
- From the Azure portal home page, click the **Menu** button in the upper left corner.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3cztp3xsatoe3iopi9e.png)

- Select **Storage Account**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lac4xoyqje82n7akmq5h.png)

- Click **Create**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1p1a2g8rivjne3x8ixqb.png)

- On the Basics tab of the **Create A Storage Account** blade, under Resource group, click Create New.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lwszlwnw47r0hbekm3ux.png)

- Provide a name for the resource group, and then click OK.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1kxqq83xpnd27jfj39dc.png)

- Scroll down to continue.
- Fill in the instance details as shown on your screen. Leave the defaults for everything else.
  - Storage account name: storageaccountxxx (use your Deployment ID)
  - Performance: Standard
  - Redundancy: Locally redundant storage (LRS)
- Click **Review + Create**.

Important note: Most Azure resources require unique names. Throughout these steps, you will see placeholder words such as "DeploymentId" as part of resource names. In a real-world scenario, you would replace these words with your unique ID or use the naming convention determined by your organization. For the purpose of this guide, we are using deployment ID 507268.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/007na3jkl6jjjc9s8w6g.png)

- When you see the notification that validation passed, click **Create** to create the storage account.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kb2p6s9yivhg9z20pm2j.png)

- The new storage account is now listed.

## **Task 2: Work with Blob Storage.**

- After the deployment is complete, click **Go To Resource**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lyw7d16hyr0aglvtxzzn.png)

- In the left menu, under **Data Storage**, click **Containers**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i6leytxp6s4118e1annz.png)

- Click **Container** and give the new container a name. When done, click **Create**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n71nkgmwhhcl8rspd4ff.png)

- Click on the name box.
- Select Private and click **Create**.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z8cut2mt1xa31xzkuppq.png)

- Click the container we just created, and then click **Upload**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8hcf6l94u7c9pp6xgm2v.png)

- Browse to a file on your local computer.

**Note:** You can create an empty .txt file or use an existing file. Consider choosing a file of a small size to minimize the upload time.

- In this case, select an image file.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yu349827h4jeihvqy8ud.png)

- Review the available options, and then click Upload.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6zzrskpnx2oxvuozwx7n.png)

- Once the file is uploaded, right-click the file and notice the options, including View/edit, Download, Properties and Delete.
- Select Properties.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ncn832pdktxpm01adrn9.png)

- Review the properties and the available actions, and then close the window.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfm8ie8nxxsu0nyagcck.png)

- Return to the storage account page.
- Storage accounts can be used for other purposes, such as file shares, queues and tables. From the left menu, click File shares.
- Review the options, and then do the same for Queues and Tables.

## **Task 3: Monitor the storage account**

- Click Diagnose and solve problems.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iyx7aiav5tb9ydjho1tx.png)

- Here you can explore some of the most common storage problems and find tools. Notice there are multiple troubleshooters.
- Click **Troubleshoot** to look at an example.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8hlacx8jcsghxmw4k6gc.png)

- Close the window.
- Next, scroll down to the Monitoring section and click Insights.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ycq1tenjkoyme0e8vrzw.png)

> Notice there is information on Failures, Performance, Availability and Capacity. Your information will be different.

Azure Blob Storage offers a scalable, secure, and cost-effective storage solution. By following these steps, you can create a storage account, organize your data in containers, and monitor its health, all while taking advantage of Azure's global infrastructure and without worrying about the underlying hardware.

## Additional Tips and Best Practices

- Use Azure Storage Explorer to manage your containers and blobs.
- Use Azure Pipelines to automate your deployment process.
- Use custom domains and SSL certificates for a professional touch.
- Monitor your storage account's performance using Azure Monitor.

By following this comprehensive guide, you can successfully work with Azure Blob Storage and enjoy the benefits of scalable and secure cloud storage.
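If you prefer scripting these tasks, the same flow can be reproduced with the Azure CLI. A minimal sketch, assuming you're already signed in with `az login`; the resource group, account, and container names are placeholders (`--auth-mode login` requires a data-plane role such as Storage Blob Data Contributor):

```bash
# Task 1: resource group + locally redundant storage account
az group create --name my-rg --location eastus
az storage account create \
  --name storageaccount507268 \
  --resource-group my-rg \
  --sku Standard_LRS

# Task 2: private container + blob upload
az storage container create \
  --name my-container \
  --account-name storageaccount507268 \
  --public-access off \
  --auth-mode login
az storage blob upload \
  --container-name my-container \
  --account-name storageaccount507268 \
  --name hello.txt \
  --file ./hello.txt \
  --auth-mode login
```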
tojumercy1
1,909,594
Is Your Website Penalized by Google? Recovery and Prevention Guide
Google, as the world's biggest search engine, is key to driving traffic to your website through...
0
2024-07-03T03:09:11
https://dev.to/juddiy/is-your-website-penalized-by-google-recovery-and-prevention-guide-21db
website, seo, learning, google
Google, as the world's biggest search engine, is key to driving traffic to your website through search result rankings. But sometimes, websites can get penalized for breaking Google's rules, which can really hurt their ranking and visibility. Here are some signs that might indicate your website has been penalized by Google:

#### Sign 1: Sudden Drop in Traffic

If you've noticed a sudden decrease in website visits and search engine traffic, it could be a signal that Google has penalized your site. You can monitor these changes using tools like Google Analytics.

#### Sign 2: Sharp Decline or Disappearance in Rankings

If your keyword rankings experience a sharp decline or disappear completely within a short period, it suggests that your website has been demoted or removed from search results.

#### Sign 3: Received Warnings from Google Search Console

Google Search Console is a crucial tool for monitoring and maintaining your website's performance in Google search. Warnings from Google Search Console may indicate that your site has violated Google's guidelines or been affected by algorithm updates.

#### Sign 4: Decreased Visibility for Search Keywords

If your website's visibility and ranking for important keywords noticeably decrease, especially compared to competitors, it could be an indication of a penalty.

### How Can You Recover?

If you suspect your website has been penalized by Google, here are some steps you can take to recover:

1. **Review and Correct Issues**: Begin by using Google Search Console and other tools to identify potential issues causing the penalty, such as low-quality links, duplicate content, or excessive use of keywords. Address these issues promptly.
2. **Submit a Reconsideration Request**: After addressing the identified issues, submit a reconsideration request through Google Search Console. Clearly explain the actions you've taken to rectify the problems and request a review of your site.
3. **Follow Google's Guidelines and Best Practices**: Ensure your website adheres to all of Google's guidelines and best practices, including maintaining high-quality content, improving user experience, optimizing page speed, and more.
4. **Monitor and Continuously Optimize**: Regularly monitor your website's performance and rankings. Continuously optimize your site to align with Google's algorithm changes and updates.

### How to Prevent?

To prevent Google penalties and maintain a strong online presence, consider the following preventive measures:

- **Regular Content Updates**: Use an [SEO AI](https://seoai.run/) tool to analyze keyword trends and competition, helping you strategize for regular updates and optimized content to stay competitive in search results.
- **Avoid Unethical Practices**: Refrain from employing black hat SEO tactics like keyword stuffing, hidden text, or link schemes. Focus on ethical SEO practices that enhance user experience and provide genuine value.
- **Prioritize User Experience**: Ensure your website delivers a seamless user experience with fast loading times, intuitive navigation, and mobile-friendliness.
- **Build Quality Backlinks**: Earn high-quality backlinks naturally from reputable sources rather than buying links or engaging in link exchanges.

By implementing these proactive measures and consistently following Google's guidelines, you can minimize the risk of penalties and maintain a robust presence in search engine results.
juddiy
1,909,592
PCI-E SSD
PCIe SSDs (Peripheral Component Interconnect Express Solid State Drives) represent the pinnacle of...
0
2024-07-03T03:08:44
https://dev.to/pciessd/pci-e-ssd-2fd
PCIe SSDs (Peripheral Component Interconnect Express Solid State Drives) represent the pinnacle of high-speed data storage technology, offering unparalleled performance for both consumer and enterprise applications. Leveraging the PCIe interface, these SSDs deliver exceptional data transfer speeds and significantly lower latency compared to traditional SATA SSDs. This results in faster boot times, quicker application loading, and enhanced overall system responsiveness. PCIe SSDs utilize advanced NAND flash memory technology, ensuring superior durability and reliability. They are available in various form factors, including M.2 and U.2, catering to a wide range of device compatibility requirements. Key features of PCIe SSDs include high throughput, enhanced error correction, and advanced data protection mechanisms, making them ideal for demanding environments such as gaming, video editing, and large-scale data processing. When selecting a PCIe SSD, considerations include the specific performance needs of your applications, compatibility with your system's motherboard, and the balance between cost and storage capacity. PCIe SSDs are an excellent choice for users seeking to maximize their system's potential, offering scalable solutions that can grow with your data storage needs. Integrating PCIe SSDs into your computing environment ensures a significant boost in performance, making them a vital component for modern high-performance systems and enterprise storage solutions. Elsewhere on the net: [PCI-E SSD ](https://serverorbit.com/solid-state-drives-ssd/pci-e-ssd/)
pciessd
1,909,591
Spinning Hyperbolic Vortex
Check out this Pen I made!
0
2024-07-03T03:00:57
https://dev.to/dan52242644dan/spinning-hyperbolic-vortex-4jf8
codepen, webdev, beginners, ai
Check out this Pen I made! {% codepen https://codepen.io/Dancodepen-io/pen/jOoZovB %}
dan52242644dan
1,909,589
Operating Systems: FIFO
Operating systems are the backbone of any computer, managing hardware and software...
0
2024-07-03T02:54:01
https://dev.to/iamthiago/sistemas-operacionais-fifo-4ebo
Operating systems are the backbone of any computer, managing hardware and software to ensure everything runs efficiently and effectively. One of the fundamental concepts in process and resource management is FIFO, which stands for "First In, First Out". In this article, we will explore what FIFO is, how it works, and why it matters in operating systems.

## What is FIFO?

FIFO is a method of organizing and manipulating data in which the first element added to the queue is the first to be removed. It is a simple but powerful technique used in several areas of computing, including memory management, process queues, and input/output buffers.

## How Does FIFO Work?

To better understand FIFO, let's consider a practical example: a line of people at a supermarket checkout. The first person to join the line is the first to be served. Likewise, in the context of operating systems, processes or data that arrive first are the first to be processed or removed.

### Example FIFO Implementation

```python
class FilaFIFO:  # "FIFO queue"
    def __init__(self):
        self.fila = []  # the underlying list ("fila" = queue)

    def enfileirar(self, item):  # enqueue
        self.fila.append(item)

    def desenfileirar(self):  # dequeue
        if len(self.fila) > 0:
            return self.fila.pop(0)
        else:
            return "Fila vazia"  # "empty queue"

# Usage example
fila = FilaFIFO()
fila.enfileirar("Processo 1")
fila.enfileirar("Processo 2")
fila.enfileirar("Processo 3")

print(fila.desenfileirar())  # Output: Processo 1
print(fila.desenfileirar())  # Output: Processo 2
print(fila.desenfileirar())  # Output: Processo 3
```

## FIFO Applications in Operating Systems

### 1. Memory Management

In operating systems, FIFO is often used in memory management, especially in paging algorithms. When RAM is full and a new page needs to be loaded, the FIFO algorithm evicts the oldest page in memory to make room for the new one.

### 2. Process Scheduling

FIFO is also used in process scheduling, where processes are executed in the order in which they arrive in the ready queue. This method is simple to implement and fair, since all processes get the chance to run in the order they arrived.

### 3. Input and Output Buffers

In input/output buffers, FIFO ensures that data is processed in the order it was received. This is crucial for maintaining data integrity and guaranteeing that no information is lost or processed out of order.

## Advantages and Disadvantages of FIFO

### Advantages

- **Simple to Implement**: FIFO is easy to understand and implement, which makes it a popular choice in many systems.
- **Fair**: All elements or processes have the same opportunity to be served, in the order they arrived.

### Disadvantages

- **No Notion of Priority**: FIFO does not take the priority of processes or data into account, which can be a problem in systems where some processes are more critical than others.
- **Potential Inefficiency**: In systems with varied workloads, FIFO may not be the most efficient approach, especially if long-running processes block shorter ones.
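To make the page-replacement use case above concrete, here is a small, self-contained sketch of FIFO page replacement — the reference string and frame count are arbitrary example values:

```python
from collections import deque

def fifo_page_replacement(reference_string, num_frames):
    """Simulates FIFO page replacement and returns the number of page faults."""
    frames = deque()   # the oldest page sits at the left end
    in_memory = set()
    faults = 0
    for page in reference_string:
        if page not in in_memory:
            faults += 1
            if len(frames) == num_frames:  # memory full: evict the oldest page
                in_memory.remove(frames.popleft())
            frames.append(page)
            in_memory.add(page)
    return faults

# Classic example: with 3 frames this reference string causes 9 page faults
print(fifo_page_replacement([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```

Interestingly, increasing the frame count to 4 on this same reference string *increases* the fault count to 10 — the well-known Belady's anomaly, a quirk specific to FIFO.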
## Conclusion

FIFO is a fundamental technique in operating systems, ensuring that data and processes are handled in the order they arrive. Although simple, its implementation and impact are significant across many areas of computing. Understanding FIFO is essential for anyone interested in operating systems and computer science in general.

For more articles and information about operating systems, networks and other areas of IT, visit my GitHub profile: [IamThiago-IT](https://github.com/IamThiago-IT). There you will find projects, code and resources that can help with your learning and professional development.

---

I hope this article is useful and informative! If you have any questions or suggestions, feel free to comment below.
iamthiago
1,909,588
What ChatGPT thought of me over the entire time I used it at work.
Nowadays, the questions a person asks have also become a criterion for assessing them. After all, by...
0
2024-07-03T02:52:51
https://dev.to/mibii/what-chat-gpt-thought-of-me-over-the-entire-time-i-used-it-at-work-pdk
resume, chatgpt
Nowadays, the questions a person asks have also become a criterion for assessing them. After all, the understanding someone demonstrates in their questions about a topic that interests them says a lot about that person. I found it interesting to see what ChatGPT thought of me over the entire time I used it at work (and I started using it from the moment it appeared).

{% embed https://youtu.be/Ld7B-21ysu4 %}

It is known that the quality of communication depends on the questions through which the asker demonstrates their understanding of a topic and their awareness. I have talked to ChatGPT about various topics, so to summarize, I asked ChatGPT to write a short (one A4 page) summary about me, taking into consideration all the chats it had with me and the questions I asked. I am only interested in the technical side of the issue, to highlight my knowledge in the fields of IT, web development and DevOps.
mibii
1,909,129
Testing SAW (Scrape Any Website)
SAW, which stands for Scrape Any Website, is a Windows application designed to facilitate web...
0
2024-07-02T16:38:10
https://dev.to/dorcasbd/testing-saw-scrape-any-website-4p57
testing, saw, qualityassurance, webdev
SAW, which stands for Scrape Any Website, is a Windows application designed to facilitate web scraping tasks. It allows users to scrape and extract data from various websites efficiently, providing a user-friendly interface for organizing and gathering information from the web.

**Key Features of SAW**

- User-Friendly Interface
- Efficient Data Extraction
- Customizable Scrape Jobs
- Support for Multiple Websites
- Organized Data Storage

**Testing Overview**

We conducted thorough testing on a Windows 10 Pro machine using SAW version 1.0.0. Our testing aimed to uncover any bugs, usability issues, performance bottlenecks, and security vulnerabilities. We focused on both typical user scenarios and edge cases to ensure comprehensive coverage.

**Key Findings**

- **Inconsistent Data Extraction:** We observed inconsistencies in the data extraction results, with some elements missing from the output. Additionally, some large websites displayed binary data instead of the actual scraped content, making the extracted data unusable. Ensuring accurate and reliable data extraction is paramount for the application's core functionality.
- **Invisible Text Input:** The text input field for adding a new scrape job has white text, which blends with the white background, making it difficult for users to see what they are typing. This is a usability issue that needs to be addressed to improve user experience.
- **Lack of URL Validation in Add URL Form:** The 'Add URL' form allows users to submit non-URL inputs without displaying any validation error. This can lead to confusion and potential misuse of the application. Users should receive a validation error and be prevented from submitting the form if the input is not a valid URL.
- **Static Cursor on Interactive Elements:** The cursor remains the default arrow even when hovering over interactive elements like links and buttons. The cursor should change to indicate that the element is clickable, enhancing the user experience and intuitiveness.

**Conclusion**

Overall, SAW has the potential to be a powerful web scraping tool. Addressing the identified issues will significantly enhance its stability, usability, and reliability.

You can download SAW from the Windows Store by following this link: [Scrape Any Website](https://apps.microsoft.com/detail/9mzxn37vw0s2). For a detailed report on the bugs identified, please refer to our comprehensive bug report [here](https://docs.google.com/document/d/1NVi0R4tQ-VlLDVY9dVk4716HJ83uLuGtNqD0x_IdkxE/edit?usp=sharing) and the Excel sheet [here](https://docs.google.com/spreadsheets/d/19MzkZomVraGGLstePE407luCfqPMYyX0jIhnoT3-MC4/edit?usp=sharing).
dorcasbd
1,909,587
Formik vs. React Hook Form
Forms are crucial components of web applications. Most information is collected from users using...
0
2024-07-03T02:52:21
https://dev.to/abelotegbola/formik-vs-react-hook-form-3dcj
Forms are crucial components of web applications; most information is collected from users through forms. Building interactive and engaging online apps usually requires developers to create forms that are both efficient and easy to use. React provides libraries that facilitate the handling of forms. Two well-known React form libraries are Formik and React Hook Form. Although they both provide strong form management capabilities for React applications, their methods and advantages vary. This post will assist you in selecting the one that best meets your requirements.

[Formik](https://formik.org/)

Formik is an advanced and extensively used form framework designed for React. It offers a thorough API for controlling form submission, validation, and state. Its integration with Yup for schema-based validation, which streamlines the process, is one of its main advantages.

Key Features of Formik:

1. Declarative Form Handling: Formik makes it easy to create and manage forms declaratively.
2. Validation: Built-in support for Yup allows for powerful and reusable validation schemas.
3. Field-Level Control: It gives easy control over individual form fields, which makes it suitable for complex forms.

Setup:

1. Install formik and yup in your React app:

```
npm i formik yup
# or
yarn add formik yup
```

2. Import Formik, Field, Form, ErrorMessage from formik:

```
import React from 'react';
import { Formik, Field, Form, ErrorMessage } from 'formik';
import * as Yup from 'yup';

const SignupForm = () => {
  return (
    <Formik
      initialValues={{ fullName: '', email: '', password: '' }}
      validationSchema={Yup.object({
        fullName: Yup.string().required('Full name is required'),
        email: Yup.string().email('Invalid email address').required('Email is required'),
        password: Yup.string().required('Password is required'),
      })}
      onSubmit={(values, { setSubmitting }) => {
        console.log(values);
        setSubmitting(false);
      }}
    >
      <Form>
        <label htmlFor="fullName">Full Name</label>
        <Field name="fullName" type="text" />
        <ErrorMessage name="fullName" />

        <label htmlFor="email">Email Address</label>
        <Field name="email" type="email" />
        <ErrorMessage name="email" />

        <label htmlFor="password">Password</label>
        <Field name="password" type="password" />
        <ErrorMessage name="password" />

        <button type="submit">Submit</button>
      </Form>
    </Formik>
  );
};
```

Some advantages of using Formik include:

- Mature Ecosystem: Formik has extensive documentation, community support, and lots of tutorials.
- Robust Validation: Schema-based validation with Yup offers a clear and maintainable way to handle complex validation scenarios.

Disadvantages:

- Verbose: Requires more boilerplate code compared to some alternatives.
- Performance: Can degrade in very large forms due to re-rendering issues.

[React Hook Form](https://react-hook-form.com/get-started)

React Hook Form leverages React hooks for form management. It emphasizes performance and simplicity, providing a leaner and more intuitive API. React Hook Form is particularly praised for its minimal re-renders and ease of integration with existing React codebases.

Key Features of React Hook Form:

- Hooks-Based: Utilizes React hooks for managing form state and validation, leading to cleaner and more concise code.
- Minimal Re-Renders: Optimized to minimize re-renders, improving performance for large forms.
- Easy Integration: Works seamlessly with existing HTML form elements and third-party component libraries.
- Validation Flexibility: Offers multiple validation strategies, including native validation, custom validation, and integration with libraries like Yup (see the sketch at the end of this post).

Setup:

1. Install react-hook-form into your project:

```
npm i react-hook-form
# or
yarn add react-hook-form
```

2. Import useForm from react-hook-form:

```
import { useForm } from 'react-hook-form';
```

3. Create your component and, using object destructuring, get register, handleSubmit and formState:

```
function App() {
  const { register, handleSubmit, formState: { errors } } = useForm();

  return (
    <form></form>
  );
}
```

4. Add the input fields with register to capture field data, and handleSubmit to process form submission:

```
import { useForm } from 'react-hook-form';

function App() {
  const { register, handleSubmit, formState: { errors } } = useForm();

  return (
    <form onSubmit={handleSubmit((data) => console.log(data))}>
      <input {...register('fullName', { required: true })} />
      {errors.fullName && <p>Full name is required.</p>}

      <input {...register('email', { required: true, pattern: /^\w{3,}@\w{2,}\.\w{2,}/i })} />
      {errors.email && <p>Email format is invalid.</p>}

      <input {...register('password', { required: true })} />
      {errors.password && <p>Password is required</p>}

      <input type="submit" />
    </form>
  );
}
```

Advantages:

- Performance: Significantly reduces re-renders, making it highly efficient.
- Simplicity: Less boilerplate and a more intuitive API, reducing development time.
- Flexibility: Easy to integrate with existing form components and validation libraries.

Disadvantages:

- Community Size: Being newer, it has a smaller community and fewer resources compared to Formik.

Conclusion

Choosing between Formik and React Hook Form depends on your specific needs and preferences. Formik is a solid choice for complex forms requiring robust validation and a mature ecosystem. React Hook Form, on the other hand, excels in performance and simplicity, making it ideal for projects where speed and ease of use are paramount.

As a frontend developer using React to build websites, it has been an awesome technology for building both small and large-scale website projects. I'm looking forward to the amazing things we will be building in the HNG internship: https://hng.tech/internship, https://hng.tech/hire
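As promised above, here is a minimal sketch of React Hook Form with schema-based validation via Yup. It assumes you've installed the `@hookform/resolvers` and `yup` packages, and mirrors the fields from the Formik example:

```jsx
import { useForm } from 'react-hook-form';
import { yupResolver } from '@hookform/resolvers/yup';
import * as Yup from 'yup';

const schema = Yup.object({
  fullName: Yup.string().required('Full name is required'),
  email: Yup.string().email('Invalid email address').required('Email is required'),
  password: Yup.string().min(8, 'Use at least 8 characters').required('Password is required'),
});

function SignupForm() {
  const { register, handleSubmit, formState: { errors } } = useForm({
    resolver: yupResolver(schema), // same Yup schema style as the Formik example above
  });

  return (
    <form onSubmit={handleSubmit((data) => console.log(data))}>
      <input placeholder="Full name" {...register('fullName')} />
      {errors.fullName && <p>{errors.fullName.message}</p>}

      <input placeholder="Email" {...register('email')} />
      {errors.email && <p>{errors.email.message}</p>}

      <input type="password" placeholder="Password" {...register('password')} />
      {errors.password && <p>{errors.password.message}</p>}

      <input type="submit" />
    </form>
  );
}

export default SignupForm;
```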
abelotegbola
1,909,584
Set GitHub Actions timeout-minutes
In this article I introduce the GitHub Actions setting timeout-minutes and tools related to...
0
2024-07-03T02:46:43
https://dev.to/suzukishunsuke/set-github-actions-timeout-minutes-1jkk
In this article I introduce the GitHub Actions setting `timeout-minutes` and tools related to it.

- What's timeout-minutes?
- Why should you set timeout-minutes?
- Linters to enforce timeout-minutes
- A command line tool to set timeout-minutes on all GitHub Actions jobs
- Set timeout-minutes in all your repositories

## What's timeout-minutes?

[timeout-minutes](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idtimeout-minutes) is the maximum number of minutes to let a job run before GitHub automatically cancels it. The default value is 360.

## Why should you set timeout-minutes?

The default value of timeout-minutes is 360, which is too long for most GitHub Actions jobs. Even if processes are stuck for some reason, jobs keep running until the timeout, wasting resources. By setting timeout-minutes properly, you can notice the issue and resolve it by retrying jobs quickly. (A concrete workflow example appears at the end of this post.)

https://exercism.org/docs/building/github/gha-best-practices#h-set-timeouts-for-workflows

## Linters to enforce timeout-minutes

There are linters to enforce timeout-minutes.

1. [ghalint](https://github.com/suzuki-shunsuke/ghalint) is a GitHub Actions linter
2. [lintnet](https://lintnet.github.io/) is a general purpose linter powered by Jsonnet

### ghalint

From ghalint [v0.2.12](https://github.com/suzuki-shunsuke/ghalint/releases/tag/v0.2.12), ghalint enforces timeout-minutes.

https://github.com/suzuki-shunsuke/ghalint/blob/main/docs/policies/012.md

### lintnet

ghalint is ported to lintnet as a module.

https://github.com/lintnet-modules/ghalint

So you can enforce timeout-minutes using the module.

https://github.com/lintnet-modules/ghalint/tree/main/workflow/job_timeout_minutes_is_required

## A command line tool to set timeout-minutes on all GitHub Actions jobs

It is very bothersome to set timeout-minutes on a lot of jobs by hand. But you can do it easily using a command line tool, [ghatm](https://github.com/suzuki-shunsuke/ghatm). It finds GitHub Actions workflows and adds timeout-minutes to jobs which don't have the setting. It edits workflow files while keeping YAML comments, indentation, empty lines, and so on.

For details, please see https://github.com/suzuki-shunsuke/ghatm .

## Set timeout-minutes in all your repositories

ghatm is useful, but it is very bothersome to run ghatm and create and merge pull requests across a lot of repositories by hand. You can do this easily using ghatm and [multi-gitter](https://github.com/lindell/multi-gitter).

1. Create pull requests with `multi-gitter run`:

```sh
multi-gitter run ./ghatm-set.sh \
  --config config.yaml \
  -O "$org" \
  -t "ci: set timeout-minutes using ghatm" \
  --skip-forks \
  -b "$body" \
  -B ci-set-timeout-minutes-by-ghatm
```

ghatm-set.sh

```sh
ghatm set
```

config.yaml

```yaml
git-type: cmd # Use git to sign commits
```

2. Merge pull requests with `multi-gitter merge`:

```sh
multi-gitter merge \
  -O "$org" \
  -B ci-set-timeout-minutes-by-ghatm \
  --skip-forks
```
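For completeness, here's what the end result looks like once `timeout-minutes` is set — a minimal workflow sketch where the job name, runner, and steps are placeholders:

```yaml
name: ci
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 10 # fail fast instead of waiting for the 360-minute default
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test
```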
suzukishunsuke
1,909,583
Issue with NPM Package Installing Globally
I was not able to install npm packages globally on macOS (using npm install -g), but was able to install...
0
2024-07-03T02:36:34
https://dev.to/kiranuknow/issue-with-npm-package-installing-globally-cgi
node, npm, nodemodules
I was not able to install npm packages globally on macOS (using `npm install -g`), but I was able to install them locally in a project. Installing packages locally in every project increases the total `node_modules` size, and having to install the same packages for every project is tedious and unnecessary.

Here are the steps I performed. I was using Homebrew to install node and npm. So:

1. Run `npm config get prefix` to check where npm actually points to, and note whether it points to `/opt/homebrew`.
2. Check whether global `node_modules` are instead installed under `/usr/local/lib`.
3. If (2) is true, run `brew remove npm` and then `brew install npm`.
4. Now check that `npm install -g express` works.

ref: https://stackoverflow.com/questions/22562041/global-installation-with-npm-doesnt-work-after-mac-os-x-mavericks-update
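A common alternative worth knowing, in case reinstalling doesn't help: point npm's global prefix at a user-owned directory so elevated permissions are never needed. A minimal sketch, assuming the default zsh shell on macOS (the directory name is arbitrary):

```bash
# Keep global packages in a directory you own
npm config set prefix ~/.npm-global

# Put its bin directory on your PATH (zsh is the macOS default shell)
echo 'export PATH="$HOME/.npm-global/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

# Global installs should now work without sudo
npm install -g express
```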
kiranuknow
1,909,581
Exploring the Power of Python and Object-Oriented Programming
Python, a versatile and widely-used programming language, is known for its simplicity and...
0
2024-07-03T02:33:53
https://dev.to/minorpianokeys/exploring-the-power-of-python-and-object-oriented-programming-4ai3
Python, a versatile and widely-used programming language, is known for its simplicity and readability. One of its most powerful features is its support for Object-Oriented Programming (OOP). OOP is a programming paradigm that uses objects and classes to structure software programs. It allows developers to create modular, reusable code, which is essential for managing large and complex applications. In this blog post, we will delve into the fundamentals of OOP in Python and explore how it can enhance your programming skills.

**Understanding Object-Oriented Programming**

At the heart of OOP are objects and classes. A class is a blueprint for creating objects. It defines a set of attributes and methods that the created objects will have. An object, on the other hand, is an instance of a class. Think of a class as a cookie cutter and objects as the cookies made from that cutter. Each cookie (object) can have different decorations (attribute values), but they all share the same basic shape and ingredients (structure and behavior).

**The Very Essential and Powerful `__init__` Method**

In Python, the `__init__` method is a special constructor method used to initialize newly created objects. It is automatically called when a new instance of a class is created, allowing you to set initial values for the object's attributes and perform any setup tasks required. The `__init__` method takes at least one argument, `self`, which refers to the instance being created. Additional parameters can be passed to `__init__` to initialize the attributes of the object. For example, in a Car class, `__init__` might take parameters like make, model, and year to set the corresponding attributes.

**Implementing OOP in Python**

Let's explore how these principles are implemented in Python with an example. Suppose we are building a simple library management system.

```python
class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author
```

We have initialized the Book class with a title and author that will be saved to each Book instance we create.

**Talking to Your Database With SQL**

SQL (Structured Query Language) is a powerful and standardized language used to communicate with relational databases. It enables users to create, read, update, and delete (CRUD) data stored in a database, as well as manage database structures. SQL's declarative nature allows users to specify what data they need without detailing the procedural steps to retrieve it. Key components of SQL include queries for data retrieval (SELECT), commands for data manipulation (INSERT, UPDATE, DELETE), and statements for schema creation and modification (CREATE, ALTER, DROP).

**Bringing Both Worlds Together**

Object-Relational Mapping (ORM) is a programming technique that enables developers to interact with a relational database using the object-oriented paradigm of their programming language. By creating a bridge between the data structures in a relational database and the objects in Python, ORM tools simplify database interactions. Instead of writing raw SQL queries, developers can perform database operations using the language's constructs. This abstraction enhances code readability, maintainability, and productivity by allowing developers to work with database records as if they were regular objects. I used SQLite in Python in my Phase 3 project.
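To connect the OOP, SQL, and ORM threads above, here's a small self-contained sketch using Python's built-in `sqlite3` module. It hand-rolls what an ORM automates — mapping an object to a row — and the table and column names are purely illustrative:

```python
import sqlite3

class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def save(self, conn):
        # Persist this object as a row — the mapping an ORM would do for us
        conn.execute(
            "INSERT INTO books (title, author) VALUES (?, ?)",
            (self.title, self.author),
        )
        conn.commit()

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")

Book("Dune", "Frank Herbert").save(conn)
print(conn.execute("SELECT title, author FROM books").fetchall())
# [('Dune', 'Frank Herbert')]
```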
Unlike graphical user interfaces (GUIs) which rely on visual elements like windows and buttons, CLIs require users to type commands into a terminal or console. CLIs allow for precise control over system resources and software configurations through commands that execute tasks ranging from file manipulation and software installation to network management and process monitoring. CLIs are platform-independent, making them suitable for both local and remote system management. While they may have a steeper learning curve compared to GUIs, CLIs offer advantages such as scriptability, rapid execution of repetitive tasks, and the ability to operate in low-resource environments or headless servers. As software development and system administration continue to evolve, CLIs remain indispensable tools for those who value speed, precision, and direct control over their computing environments. **Conclusion** Python's support for Object-Oriented Programming (OOP) empowers developers with a structured and efficient approach to building software systems. By leveraging classes, objects, and the versatile __init__ method, developers can create modular and reusable code that scales well with project complexity. Integrating SQL for database interactions and utilizing Object-Relational Mapping (ORM) tools further enhances Python's capabilities, enabling seamless integration of database operations within OOP principles. Additionally, Command-Line Interfaces (CLIs) extend Python's versatility by providing robust, text-based interaction with systems, offering precise control and automation capabilities. Together, these features show off Python's adaptability across diverse applications, from web development to data science and system administration, making it a cornerstone in modern software engineering.
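To tie the sections above together, here is a minimal, self-contained sketch (my addition, not code from the original project) combining a class with an `__init__` method, raw SQL via Python's built-in `sqlite3` module, and an ORM-style `save` method. The table schema and the `save` method name are illustrative assumptions.

```python
# Minimal sketch: a Book class whose instances can persist themselves to SQLite.
# The schema and method names here are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database, just for demonstration
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, author TEXT)")

class Book:
    def __init__(self, title, author):
        # __init__ runs automatically when a new Book instance is created
        self.title = title
        self.author = author

    def save(self):
        # ORM-style method: hides the raw INSERT behind a plain method call
        cur = conn.execute(
            "INSERT INTO books (title, author) VALUES (?, ?)",
            (self.title, self.author),
        )
        conn.commit()
        return cur.lastrowid

book = Book("The Pragmatic Programmer", "Hunt and Thomas")
book.save()
print(conn.execute("SELECT title, author FROM books").fetchall())
```

The parameterized `?` placeholders let SQLite handle quoting, which is both safer and closer to what full ORM libraries do under the hood.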
minorpianokeys
1,909,580
User Management Automation in Linux Using Bash Script
Scenario As a SysOps engineer, you have been tasked to write a bash script that creates...
0
2024-07-03T02:27:10
https://dev.to/sarahligbe/user-management-automation-in-linux-using-bash-script-197l
bash, automation, linux, devops
## Scenario As a SysOps engineer, you have been tasked to write a bash script that creates users, groups, and home directories with appropriate permissions for new employees in your company. This article will walk you through writing a bash script that automates the process of creating users and groups on a Linux system. ## Solution ### Pre-requisites 1. An Ubuntu server 2. Basic Linux knowledge 3. A user with sudo privileges ### Script Overview The script `create_users.sh` reads a text file containing usernames and group names, creates the specified users and groups, sets up home directories, generates random passwords for each user, and logs all actions. It is important to run the script as a user with sudo privileges, since using your root user directly is not considered best practice. The script takes the input file as a command-line argument. ### Breakdown of the create_users.sh script #### Initial Checks The script starts with some important checks: ```bash # Function to check if user has sudo privileges check_sudo() { if sudo -n true 2>/dev/null; then return 0 else return 1 fi } # Check if user has sudo privileges (use "if ! check_sudo" so the function's exit status is tested; "[ ! check_sudo ]" would only test a literal string and never fail) if ! check_sudo; then echo "This script requires sudo privileges to run" echo "Please run this script with sudo or as a user with sudo privileges" exit 1 fi # Check if input file is provided if [ $# -eq 0 ]; then echo "Please provide an input file" echo "Usage: $0 <input-file>" exit 1 fi ``` These checks ensure that the script is run with sudo privileges and that an input file is provided. #### Directory and File Setup ```bash input_file=$1 log_file="/var/log/user_management.log" password_csv="/var/secure/user_passwords.csv" password_file="/var/secure/user_passwords.txt" password_dir="/var/secure" # Create the password directory if it doesn't exist if [ ! -d "$password_dir" ]; then sudo mkdir "$password_dir" fi # Create log file, password csv, and password file and set permissions sudo touch $log_file $password_csv $password_file sudo chmod 600 $password_csv $password_file ``` The variables denoting the various files and directories are defined. The input_file containing the user and group names is assigned **$1**, denoting that it is the first command-line argument supplied. The if statement checks whether the directory where the user passwords will be stored exists and creates it if it doesn't. The password files are given restrictive permissions (600) to ensure only the owner can read them. #### Helper Functions Two helper functions are defined: ```bash # Function to log actions log() { echo "$(date): $1" | sudo tee -a "$log_file" > /dev/null } # Function to generate random password generate_password() { openssl rand -base64 12 } ``` The `log` function appends timestamped messages to the log file, while `generate_password` derives a random password from 12 bytes of entropy (16 base64 characters). #### Reading the input file and Error Handling The input file is read line by line using a `while` loop ```bash # Set line number to zero; used in error handling for invalid inputs line_number=0 # Read input file line by line while IFS=';' read -r username groups; do # Remove leading/trailing whitespace username=$(echo $username | xargs) groups=$(echo $groups | xargs) # Increment the line number after reading each line line_number=$((line_number + 1)) # Check that the username and groups are present if [[ -z "$username" || -z "$groups" ]]; then log "Error: Invalid input on line $line_number. Ensure Username and groups are provided" echo "Error: Invalid input on line $line_number. Ensure Username and groups are provided" continue fi # Check if user already exists if id "$username" &>/dev/null; then log "User $username already exists. Skipping." continue fi # Create user's personal group sudo groupadd $username log "Created personal group $username" # Create user with home directory with appropriate ownership and permissions to allow only the user read, write, and execute sudo useradd -m -g $username $username sudo chmod 700 "/home/$username" sudo chown "$username:$username" "/home/$username" log "Created user $username with home directory" # Set random password for user password=$(generate_password) echo "$username:$password" | sudo chpasswd echo "$username,$password" | sudo tee -a $password_csv > /dev/null echo "$username,$password" | sudo tee -a $password_file > /dev/null log "Set password for user $username" # Add user to additional groups IFS=',' read -ra group_array <<< "$groups" do_not_edit='' for group in "${group_array[@]}"; do group=$(echo $group | xargs) # Create group if it doesn't exist if ! getent group $group &>/dev/null; then sudo groupadd $group log "Created group $group" fi sudo usermod -a -G $group $username log "Added user $username to group $group" done done < "$input_file" ``` Before the loop, a variable called `line_number` is set to 0. This variable is used as a form of error handling to flag any lines in the input file that do not have a username or a group defined. `IFS=';'` sets the field separator to a semicolon because the user and group names are separated by a semicolon. `read -r username groups` reads a line, splitting it into the username and groups variables. Leading and trailing whitespace is removed from username and groups using `xargs`. `line_number=$((line_number + 1))` increases by 1 at the start of each iteration in order to reference the current line number in the error logs and notifications. The script then checks whether the username or group name is present on the line; if either is missing, it sends an error message to the log file, notifies you on the command line about which line has the error, and moves on to the next line. It also checks whether a user already exists and, if so, skips to the next entry. After that it creates the user's personal group, creates the user with a home directory whose permissions are set to 700 so only the owner can access it, and sets a random password for the user. The script also handles multiple group assignments in the case where a user is to join several groups; these are separated in the input file by a comma delimiter. The script loops through the array of `groups`, creates each group if it doesn't exist, and adds the user to each specified group. ### Usage To use the script, run: ```bash sudo chmod +x create_users.sh ./create_users.sh input_file.txt ``` Where input_file.txt contains lines in the format ``` username;group1,group2,group3 ``` ### Security Considerations 1. The user running the script must have sudo privileges. 2. Passwords are stored in files with restricted permissions. 3. Random passwords are generated using OpenSSL for better security. ## Conclusion This bash script provides a powerful and flexible way to automate user and group creation in Linux systems. By handling various scenarios and providing detailed logging, it simplifies the process of managing multiple user accounts, making it an invaluable tool for system administrators.
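To make the expected behavior concrete, here is a hypothetical input file and the lines the script would then append to `/var/secure/user_passwords.csv` (the usernames, groups, and passwords below are made-up placeholders, not output from a real run):

```
# input_file.txt (hypothetical)
sarah;devops,www-data
tunde;devops

# matching /var/secure/user_passwords.csv entries (passwords are placeholders)
sarah,u8Fw1kQz9rT2LmXa
tunde,Pc5Jd0vYs7Nb3KqE
```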
This is a practical example of the skills developed at [HNG](https://hng.tech/hire), where participants learn to create efficient, scalable solutions for real-world problems. Register for the [HNG internship](https://hng.tech/internship) to develop these skills.
sarahligbe
1,909,578
Figma launches new AI features
Stability AI lands a lifeline from Sean Parker, Greycroft Stability AI secured new funding...
0
2024-07-03T02:26:34
https://aisnapshot.beehiiv.com/p/figma-launches-new-ai-features
ai, news, programming, python
## Stability AI lands a lifeline from Sean Parker, Greycroft Stability AI secured new funding from notable investors like Sean Parker and Greycroft, amid financial difficulties and leadership changes. The startup behind the image-generating model Stable Diffusion has struggled with cash flow and mounting operational costs. New CEO Prem Akkaraju and the board plan to focus on expanding Stability’s generative AI capabilities while adhering to open-source principles. Despite past mismanagement and ongoing legal challenges, the investment aims to bolster Stability's future in the burgeoning generative media market. ([Read more](https://techcrunch.com/2024/06/25/stability-ai-lands-a-lifeline-from-sean-parker-greycroft/)) ## Meet Figma AI: Empowering Designers with Intelligent Tools Figma has unveiled Figma AI, a suite of AI-driven features aimed at enhancing designers' productivity and creativity. Key features include: - Visual and Asset Search: Enhanced search capabilities using images or text to find relevant designs. - AI-Powered Content Generation: Tools to create realistic copy and images, background removal, and layer renaming. - Prototyping Tools: Quick-turn static designs into interactive prototypes. - Text Prompt Design Generation: Generate UI layouts and components from text descriptions. These tools are available in a beta program through 2024, and Figma ensures privacy and data protection in its AI training models. ([Read More](https://www.figma.com/blog/introducing-figma-ai/)) ## From bare metal to a 70B model A team from Imbue provides a detailed guide on setting up infrastructure to train a 70B parameter model, starting from bare metal hardware. The process includes provisioning machines, setting up a robust InfiniBand network, diagnosing hardware issues, and running health checks. Key learnings emphasize the importance of reproducibility, automated error handling, and the flexibility of swapping hardware. The effort resulted in a stable infrastructure capable of efficient, large-scale model training, and they invite interested individuals to join their team. ([Read More](https://imbue.com/research/70b-infrastructure/)) ## Meta 3D Gen Meta 3D Gen (3DGen) is a cutting-edge pipeline for fast text-to-3D asset generation, creating high-fidelity 3D shapes and textures in under a minute with support for physically-based rendering (PBR). It integrates Meta 3D AssetGen and Meta 3D TextureGen for superior text-to-3D and text-to-texture generation, achieving a 68% win rate over single-stage models. 3DGen outperforms industry baselines in prompt fidelity and visual quality, while also being significantly faster. ([Read More](https://ai.meta.com/research/publications/meta-3d-gen/)) ## Finding GPT-4’s mistakes with GPT-4 OpenAI developed CriticGPT, based on GPT-4, to catch errors in ChatGPT's code output, finding that it helps users outperform those without assistance 60% of the time. By integrating CriticGPT into their RLHF labeling pipeline, OpenAI aims to enhance AI trainers' ability to evaluate ChatGPT's outputs, especially as models become more accurate and their errors more subtle. CriticGPT's critiques are preferred over unassisted critiques 63% of the time, producing more comprehensive and accurate feedback with fewer hallucinations. ([Read More](https://openai.com/index/finding-gpt4s-mistakes-with-gpt-4/))
ai-snapshot
1,909,577
Hello
Hi, I'm Mike Selva, I am a Self-Learning Full Stack Developer, I am 17 years old now. I have been...
0
2024-07-03T02:26:08
https://dev.to/mikeselva123/hello-35kn
Hi, I'm Mike Selva. I am a self-learning full-stack developer, and I am 17 years old. I have been coding for over two weeks. I have joined GitHub and am creating, editing, and updating my projects. You can find my GitHub at [My Github Account](https://github.com/mikeselva123). Well, see you next time. Bye!
mikeselva123
1,909,575
CERTIFIED RECOVERY SPECIALIST // CONSULT DIGITAL WEB RECOVERY
As a German citizen residing in Western Australia, I found myself embroiled in a harrowing ordeal...
0
2024-07-03T02:22:01
https://dev.to/margaret_dalley_cabfa4fa2/certified-recovery-specialist-consult-digital-web-recovery-4h4f
As a German citizen residing in Western Australia, I found myself embroiled in a harrowing ordeal stemming from a devastating loss in my crypto investments. In 2021, at the age of 28, I encountered a distressing setback when the value of my investments plummeted due to what was purported to be the result of COVID-19 deflation. This catastrophic turn of events led to the loss of a substantial sum, totaling approximately 10.065221 BTC, equivalent to over USD 250,000. To compound the distress, I had already invested $320,000 in cash into this venture, only to witness it evaporate before my very eyes. In the wake of this staggering loss, I embarked on a desperate quest to reclaim my funds, seeking the assistance of numerous purported crypto recovery experts. Despite my fervent efforts, each attempt proved futile, plunging me into a quagmire of debt and despair. The weight of this financial turmoil exacted a heavy toll on my mental well-being, leading me through a tumultuous journey marked by profound depression and relentless internal strife. Amid this bleak landscape, a glimmer of hope emerged in the form of Digital Web Recovery, a beacon of light amidst the shadows of my desolation. Upon reaching out to them via email, their response was swift and resolute, instilling a newfound sense of optimism within me. The urgency with which they addressed my plight served as a balm to my wounded spirit, igniting a spark of resilience within me. Digital Web Recovery's team of seasoned cybersecurity experts embarked on a relentless pursuit to track down the perpetrators responsible for the nefarious act that had robbed me of my hard-earned assets. Their unwavering dedication and expertise bore fruit as they meticulously unraveled the web of deceit woven by the swindlers, ultimately facilitating the recovery of a significant portion of my stolen funds. This monumental achievement not only offered a semblance of financial restitution but also catalyzed my journey toward healing and restoration. Beyond the remarkable feat of fund recovery, Digital Web Recovery extended a guiding hand of enlightenment and empowerment, imparting invaluable knowledge on crucial security measures essential for safeguarding digital assets. Their emphasis on implementing two-factor authentication, crafting robust and unique passwords, and remaining vigilant against insidious phishing attempts equipped me with the tools and awareness needed to fortify my defenses against future threats. In retrospect, my encounter with Digital Web Recovery transcended the realm of financial restitution; it emerged as a transformative experience imbued with resilience and growth. Their unwavering support, swift action, and sage guidance not only helped me reclaim a semblance of financial stability but also empowered me to navigate the digital realm with newfound confidence and vigilance. I am profoundly grateful for the unwavering dedication and expertise of Digital Web Recovery. The justice and restoration they delivered have left an indelible mark on my life, serving as a testament to resilience amidst adversity. I wholeheartedly recommend Digital Web Recovery to anyone grappling with the aftermath of cyber fraud and deception. WhatsApp: +14033060588; Email: digitalwebexperts@zohomail.com; Website: https://digitalwebrecovery.com. Their unwavering dedication and expertise are nothing short of extraordinary. Thanks.
margaret_dalley_cabfa4fa2
1,909,574
Seamless Automation: Integrating Vacuum Cylinders into Production Lines
Pros of using Vacuum Cylinders in Automation Automation is becoming more and popular for simplifying...
0
2024-07-03T02:20:49
https://dev.to/katie_abrahamkqjagsa_759/seamless-automation-integrating-vacuum-cylinders-into-production-lines-4plg
vacuumcylinder
Pros of Using Vacuum Cylinders in Automation: Automation is becoming more and more popular for simplifying work processes and improving productivity in this modern era. Companies want to keep operations under control, improve efficiency, and put more emphasis on work safety. The inclusion of vacuum cylinders is an ideal way to move toward complete automation. They offer many benefits for the quality, speed, and safety of work processes. Key Benefits of Including Vacuum Cylinders: The use of vacuum cylinders in production operations offers many advantages for companies. First of all, these devices guarantee fast and efficient material lifting. With vacuum cylinders, users can easily transport materials to a specific spot without disturbances, thus conserving time and improving overall throughput. Besides, vacuum cylinders are quite economical, which makes them an excellent choice for businesses looking to decrease their operating costs. Together with their minimal maintenance requirements, long life expectancy, and robust construction, they offer cost savings throughout their lifecycle. A significant benefit is the elimination of manual handling, which ultimately increases safety in production. The more they reduce manual tasks, the better the chances of reducing danger and human error, which improves safety practices as a whole. Offering Innovation and Delighting the Customer: Integrating vacuum cylinders has been a major source of innovation, with high added value for customer satisfaction. These devices offer flexibility: dimensions can be adjusted to production requirements, tooling can easily be adapted for different applications, and appropriate controls can be chosen to improve productivity. In addition, technological advancements in the integration of vacuum cylinders have created fertile ground for intelligent vacuum cylinders. Some of these devices feature built-in sensors that monitor the pressure applied to materials while they are being transported, helping to reduce material breakage due to improper handling. Enhanced Safety Measures: Accidents in the work area can cause considerable damage in a factory. The integration of vacuum cylinders not only enhances safety by decreasing manual handling, but also increases operational efficiency and reduces the need for human intervention. Utilizing Vacuum Cylinders: The good news is that vacuum cylinders are easy and user-friendly to use. To move material efficiently, a vacuum cylinder system includes a vacuum generator, vacuum cups, and the supporting hardware that holds them. When these elements are connected, users can manage all types of materials with ease. In addition to being versatile, vacuum cups are compatible with all sorts of shapes and sizes across different industries. After-Sales Support and Quality Assurance: The best vacuum cylinder manufacturers are known for the wide range of after-sales services they offer. This ensures that the device will continue to run smoothly, with regular maintenance services and technical support helping to head off any possible future problems. Product quality is one of the most important points within a production process, whatever the industry. Beyond quality control, vacuum cylinders make it easier to move materials between stations and are less likely to damage finished products. Moreover, these devices reduce machine downtime, improving productivity in the production environment. Cross-Industry Applications: Vacuum cylinders can be used in a variety of industries. These devices have been designed for various applications in sectors such as food and pharmaceuticals, packaging, electrical and electronics, and the automotive industry. In Conclusion: Automated vacuum cylinders have changed the way companies handle their processes, giving businesses the power to improve efficiency and safety in everything they do, all while increasing productivity and saving money. The stainless steel pneumatic cylinder is user-friendly, low-maintenance, and highly durable, which makes vacuum cylinders a top choice in all those industries that focus on consistently putting out value-for-money products for their customers. Companies that integrate vacuum cylinders into their manufacturing lines are assured airtight efficiency and the freedom to refocus on success and innovation within their market domain.
katie_abrahamkqjagsa_759
1,909,552
Leetcode Day 2: Palindrome Number Explained
The problem is as follows: Given an integer x, return true if x is a palindrome, and false...
0
2024-07-03T02:19:12
https://dev.to/simona-cancian/leetcode-day-2-palindrome-number-377i
leetcode, python, beginners, codenewbie
The problem is as follows: Given an integer `x`, return `true` if `x` is a **palindrome**, and `false` _otherwise_. Example 1: ``` Input: x = 121 Output: true Explanation: 121 reads as 121 from left to right and from right to left. ``` Example 2: ``` Input: x = -121 Output: false Explanation: From left to right, it reads -121. From right to left, it becomes 121-. Therefore, it is not a palindrome. ``` Example 3: ``` Input: x = 10 Output: false Explanation: Reads 01 from right to left. Therefore, it is not a palindrome. ``` Here is how I solved it: - Let's convert the given integer into a string - In Python we can reverse a string by slicing: the slice `[::-1]` starts at the end of the string and steps backwards one character at a time, producing the reversed string. https://www.w3schools.com/python/python_howto_reverse_string.asp ``` rev = str(x)[::-1] ``` - Let's compare the reversed string to the given integer (again converted to a string: comparing a string to an int would never be equal). Return True if it matches, else False. ``` if rev == str(x): return True return False ``` But hey, we can actually write this in just one line of code using a conditional (ternary) expression! To be honest, I am not sure if it's bad design, but it seemed more readable to me. Here is the complete solution: ``` class Solution: def isPalindrome(self, x: int) -> bool: return True if str(x)[::-1] == str(x) else False ``` (Two slightly tighter variants are sketched below.)
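As a side note (my addition, not part of the original solution): the equality comparison already evaluates to `True` or `False`, so the ternary can be dropped entirely, and the number can also be reversed arithmetically without converting to a string. A minimal sketch of both variants:

```python
class Solution:
    def isPalindrome(self, x: int) -> bool:
        # The comparison itself already yields True or False
        return str(x) == str(x)[::-1]

# Alternative without string conversion: reverse the digits arithmetically.
def is_palindrome_math(x: int) -> bool:
    if x < 0:
        return False  # negatives can never be palindromes (the leading '-')
    original, reversed_num = x, 0
    while x > 0:
        reversed_num = reversed_num * 10 + x % 10  # append the last digit
        x //= 10                                   # drop the last digit
    return original == reversed_num

print(is_palindrome_math(121), is_palindrome_math(-121), is_palindrome_math(10))
# True False False
```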
simona-cancian
1,909,551
Day 1: Error: "Module not found: Error: Can't resolve '@angular/core'"
Scenario: This error often occurs when you try to run your Angular application after...
0
2024-07-03T02:16:14
https://dev.to/dipakahirav/day-1-error-module-not-found-error-cant-resolve-angularcore-2il0
angular, webdev, javascript, errors
#### Scenario: This error often occurs when you try to run your Angular application after installing or upgrading Angular packages and it can't find the `@angular/core` module. Please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials. #### Solution: 1. **Ensure Dependencies are Installed Correctly:** Make sure all dependencies are properly installed by running: ```sh npm install ``` 2. **Check `package.json`:** Ensure that `@angular/core` and other Angular dependencies are listed in your `package.json`. It should look something like this: ```json "dependencies": { "@angular/core": "^12.0.0", "@angular/common": "^12.0.0", ... } ``` 3. **Clean npm Cache and Reinstall Node Modules:** Sometimes, the npm cache can cause issues. Clear the cache and reinstall the node modules: ```sh npm cache clean --force rm -rf node_modules npm install ``` 4. **Check the `tsconfig.json` Configuration:** Ensure the paths in `tsconfig.json` are correctly configured: ```json { "compilerOptions": { "baseUrl": "./", "paths": { "@angular/*": ["node_modules/@angular/*"] } } } ``` 5. **Update Angular CLI:** Ensure your Angular CLI is up to date: ```sh npm install -g @angular/cli ``` 6. **Restart the Development Server:** Sometimes, simply restarting the development server can solve the issue: ```sh ng serve ``` If you've followed all these steps and the problem persists, you may want to check for any specific issues related to your project setup or Angular version. Feel free to ask for another error and its solution tomorrow! Please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials. ### Follow and Subscribe: - **Instagram**: [devdivewithdipak](https://www.instagram.com/devdivewithdipak) - **Website**: [Dipak Ahirav](https://www.dipakahirav.com) - **Email**: dipaksahirav@gmail.com - **YouTube**: [devDive with Dipak](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) - **LinkedIn**: [Dipak Ahirav](https://www.linkedin.com/in/dipak-ahirav-606bba128) Happy coding! 🚀
dipakahirav
1,909,549
NoInfer in Typescript 5.4
I introduce NoInfer which is new feature in TypeScript 5.4. What is NoInfer? NoInfer is a feature in...
0
2024-07-03T02:12:35
https://dev.to/makoto0825/noinfer-in-typescript-54-4k72
In this post I introduce NoInfer, a new feature in TypeScript 5.4. **What is NoInfer?** NoInfer is a feature in TypeScript that suppresses type inference. For example, consider the following code. ```typescript const func1 = <T extends string>(array: T[], searchElement: T) => { return array.indexOf(searchElement); }; console.log(func1(["a","b","c"],"d")) ``` Here `func1` is called with `"d"` as its second argument, but since `"d"` does not exist among the elements of the array in the first argument, the index search returns -1. However, TypeScript does not produce a type error in such a case. This is because both the array passed as the first argument and the string passed as the second argument are used as materials to infer the type T, resulting in T being inferred as 'a' | 'b' | 'c' | 'd'. So, how can you produce a type error when a string other than `a`, `b`, or `c` is supplied as the second argument? This is where `NoInfer` comes in. ```typescript const func2 = <T extends string>(array: T[], searchElement: NoInfer<T>) => { return array.indexOf(searchElement); }; console.log(func2(["a","b","c"],"b"))// no error console.log(func2(["a","b","c"],"d"))// error: Argument of type '"d"' is not assignable to parameter of type '"a" | "b" | "c"' ``` Using `NoInfer` stops the expansion of type inference at that position. Hence, in this case, the type T is constrained to 'a' | 'b' | 'c', so TypeScript reports an error when you pass 'd' as the second argument.
makoto0825
1,909,547
LED transparent screen: visual revolution and technological breakthrough in the new era
With the continuous advancement of technology and the growing market demand, LED transparent screens...
0
2024-07-03T02:09:00
https://dev.to/sostrondylan/led-transparent-screen-visual-revolution-and-technological-breakthrough-in-the-new-era-5gh6
led, transparent, screen
With the continuous advancement of technology and growing market demand, [LED transparent screens](https://sostron.com/products/crystal-transparent-led-screen/) stand out in the advertising and display fields with their unique advantages. This article will explore in depth the core competitive advantages of LED transparent screens and how they meet the needs of modern business and architecture through technological innovation. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2m98tdfej2xcab7g1vhj.png) 1. Core competitive advantages of LED transparent screens High-quality materials: LED transparent screens use universal and stable display materials to ensure the reliability and durability of the product. Innovative lamp bead technology: Their excellent color-mixing effect brings users an unprecedented visual experience and enhances the attractiveness of advertising and information delivery. [Introducing the working principle of LED lamp beads.](https://sostron.com/the-working-principle-of-led-lamp-beads/) Structural firmness: The embedded substrate slot design provides extremely high stability and ensures stable display performance in various environments. Efficient heat dissipation and fast heat conduction: Full soldering and an excellent heat dissipation design ensure the stability and lifespan of the display screen during long-term operation. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6211tcm22tuxm0zd9hr.png) 2. The main advantages of LED transparent screens High transparency: a see-through rate of about 75% retains the lighting and transparency functions of the glass. At the same time, the almost invisible LED lamp bead design does not affect the beauty of the glass curtain wall. [There are 3 types of LED lamp bead specifications here.](https://sostron.com/introducing-3-types-of-led-lamp-bead-specifications-for-you/) Lightweight design: The screen motherboard is only 10mm thick and weighs as little as 14kg/㎡, which greatly reduces the load requirements on the glass curtain wall. It is easy to install and does not take up extra space. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9u0naq3zxgreoqancjb.png) Lower installation costs: Due to its light weight and easy installation, it does not require complex supporting steel structures, thereby greatly reducing installation costs. Convenient maintenance: The indoor maintenance design is fast and safe, greatly saving manpower and material resources. Reduced lighting costs: LED transparent screens can serve as part of the exterior wall lighting, saving additional lighting costs while providing more attractive advertising benefits. [7 advantages of using LED transparent screen rental for retail display.](https://sostron.com/7-advantages-of-using-led-transparent-screen-rental/) Energy saving and environmental protection: Average power consumption is less than 280W/㎡, and no traditional cooling system is required, reducing energy consumption. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qf5giinx8vah8dl2wh0k.png) Easy operation and strong controllability: Supports network cable connection and remote wireless control, making it convenient to change display content at any time to meet the needs of different scenarios. Wide range of applications: LED transparent screens are suitable for banks, shopping malls, theaters, commercial streets, chain stores, hotels, municipal public buildings, landmark buildings, office buildings, and other venues. [Here are ten questions about transparent LED window displays.](https://sostron.com/transparent-led-window-display-ten-questions-answered/) The introduction of LED transparent screens not only breaks the limitations of traditional LED displays, but also meets the new needs of modern business and architecture for display equipment through innovative technology and excellent performance. With the continuous advancement of technology, we have reason to believe that LED transparent screens will play an even more important role in the display field of the future. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xzwacto7x7q5ho6b2fuk.png) Thank you for reading. I hope we can solve your problems. Sostron is a professional [LED display manufacturer](https://sostron.com/about-us/). We provide all kinds of displays, display leasing, and display solutions around the world. If you want to know [how to distinguish the quality of LED transparent screens](https://dev.to/sostrondylan/how-to-distinguish-the-quality-of-led-transparent-screens-1ilm), please click through and read. Follow me to learn more about LED display knowledge. Contact us on WhatsApp: https://api.whatsapp.com/send?phone=+8613570218702&text=Hello
sostrondylan
1,909,303
Git and GitHub Explained Like You're Five
Imagine that you have an application or a project folder. And you want to collaborate with other...
0
2024-07-03T02:06:00
https://dev.to/thekarlesi/git-and-github-explain-like-youre-five-2c2p
webdev, beginners, programming, tutorial
Imagine that you have an application or a project folder, and you want to collaborate with other people on that project. Or imagine that you have your application, and you uploaded some new code into it. You added a new feature, for example. And now your application breaks. Let's say your application is not running anymore like it was before. And you are like, "Hey, I would really love it if I could go back in the past to that codebase in which my application was running." "I wish there was some sort of way to, you know, take a picture of that codebase, and whenever I want that picture, that codebase as it was at that time in the past, I could have it." So, basically, saving the history of your project. Like, let's say, on 1st July your project looked like this. On 2nd July, your project looked like this. On 3rd July, your project looked like this: some additional files, code changes, etc. You want to go back to what your project looked like on 1st July? You can do that. You want to go back to what your project looked like on 2nd July? You can do that. That is one use case you can think of. Another one can be contributing to open source, for example. Or, you and your friends are contributing to one project. Now, imagine that 100 people are contributing to just one folder. That is what happens in open source projects as well. There is one main folder, and so many people from around the world contribute to that project. Now, you might be wondering, "How do these people share their code?" If you want to share code in some open source project, add your own functionality, or share some changes you made, how can you do that? We can do that through Git and GitHub. Git and GitHub allow us to maintain this history of the project: at which particular point in time, which person made which change, and where in the project. Git helps us do that. P.S. Get [my 2 Hour Web Developer](https://karlgusta.gumroad.com/l/eofdr) to learn how to think like a developer.
thekarlesi
1,909,546
Pressure Infusion Bags: Enhancing Medical Care with Reliable Suppliers
Do you know just what a Pressure Infusion Bag is? It's a bag that is unique will help experts which...
0
2024-07-03T02:04:38
https://dev.to/katie_abrahamkqjagsa_759/pressure-infusion-bags-enhancing-medical-care-with-reliable-suppliers-2m88
infusionbag
Do you know just what a Pressure Infusion Bag is? It's a unique bag that helps medical experts deliver medicine and fluids to patients faster and more safely. Pressure infusion bags are created by special companies called manufacturers who make sure they are of good quality and safe to use. Benefits: The Pressure Infusion Bag helps medical professionals deliver and time medicine or fluids quickly for patients. The bag creates pressure that pushes the medicine or fluids into the patient's body faster and much more efficiently. This means it can save precious minutes in a crisis situation, allowing doctors and nurses to focus on saving lives. Innovation: Pressure Infusion Bags are a revolutionary device in the medical field, designed to help medical practitioners deliver medications and fluids more quickly and safely. The bags have a unique design that helps create pressure, which pushes the fluids or medicine into the patient's body with ease. Innovation means discovering new and better ways to do things, and the Pressure Infusion Bag definitely ticks that box. Safety: Patient safety is the number-one priority of medical professionals. The Pressure Infusion Bag is designed to help clinicians give medicine and fluids to patients safely, and in the right amount. It is important to use the Pressure Infusion Bag as instructed to ensure patients get the correct amount of medicine or fluids without any risk of harm. Use and How to Use: The Pressure Infusion Bag is a simple tool to use. Medical professionals just need to fill it with the proper amount of medicine or fluids, close the bag, and then connect it to the patient's IV tubing. The bag is then inflated with air to produce pressure, helping push the medication or fluids into the patient's body at the correct rate. It is essential to follow the instructions provided with the Pressure Infusion Bag to ensure it is used correctly and safely. Service and Quality: Manufacturers that create Pressure Infusion Bags are responsible for ensuring their products are of high quality and safe to use. They also offer after-sales services to help medical specialists use their products in the best way possible. Vendors are committed to ensuring that their products meet the highest standards of quality and safety to guarantee the best possible outcomes for patients. Application: Pressure Infusion Bags are used in a range of medical settings, such as emergency departments, intensive care units, and operating rooms. They are used to supply fluids, medication, and anesthesia to patients quickly and effectively. The bag's mechanism supports a range of surgical procedures, enabling doctors to manage drugs and fluids in a controlled and efficient manner. In Conclusion: Pressure Infusion Bags are a vital instrument in the delivery of medical care, and dependable suppliers play a significant role in ensuring their quality and safety for patients. They are an innovative product that helps doctors save precious time when delivering medicine or fluids and improve patient outcomes. By using the Pressure Infusion Bag correctly and following the instructions and recommendations of vendors, medical professionals can enhance their practices, respond more quickly to medical emergencies, and save more lives.
katie_abrahamkqjagsa_759
1,909,536
Linux User Creation With Bash Script
Introduction Managing users and groups in a Unix-like operating system can be a tedious...
0
2024-07-03T02:00:16
https://dev.to/daniaernest/linux-user-creation-with-bash-script-46ec
devops, linux, bash
## Introduction Managing users and groups in a Unix-like operating system can be a tedious task, especially when dealing with multiple users. To simplify this process, we can use a Bash script to automate the creation of users and groups, set up home directories, generate random passwords, and log all actions. This blog post will walk you through a comprehensive Bash script that accomplishes these tasks. ## Script Overview The script we're going to discuss performs the following functions: 1. Create Users and Groups: Reads a file containing usernames and group names, creates the users and groups if they do not exist, and assigns users to the specified groups. 2. Set Up Home Directories: Sets up home directories with appropriate permissions and ownership for each user. 3. Generate Random Passwords: Generates random passwords for the users and stores them securely. 4. Log Actions: Logs all actions to /var/log/user_management.log for auditing and troubleshooting. 5. Store Passwords Securely: Stores the generated passwords in /var/secure/user_passwords.csv with restricted access. ## The Script Here is the complete Bash script: ``` #!/bin/bash LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Ensure /var/secure exists and has the correct permissions mkdir -p /var/secure chmod 700 /var/secure touch "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE" # Function to log messages log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE" } # Function to generate random passwords generate_password() { local password_length=12 tr -dc A-Za-z0-9 </dev/urandom | head -c $password_length } # Function to add users, groups and set up home directories setup_user() { local username=$1 local groups=$2 # Create the user if ! id -u "$username" &>/dev/null; then password=$(generate_password) useradd -m -s /bin/bash "$username" echo "$username:$password" | chpasswd log_message "User $username created." # Store the username and password echo "$username,$password" >> "$PASSWORD_FILE" log_message "Password for $username stored." else log_message "User $username already exists." fi # Create groups and add user to groups IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do if ! getent group "$group" &>/dev/null; then groupadd "$group" log_message "Group $group created." fi usermod -aG "$group" "$username" log_message "Added $username to $group." done # Set up the home directory local home_dir="/home/$username" chown "$username":"$username" "$home_dir" chmod 700 "$home_dir" log_message "Home directory for $username set up with appropriate permissions." } # Main script if [ $# -eq 0 ]; then log_message "Usage: $0 <input_file>" exit 1 fi input_file=$1 log_message "Starting users and groups script." # Read the input file and process each line while IFS=';' read -r username groups; do setup_user "$username" "$groups" done < "$input_file" log_message "Users created with password and set to groups script completed." ``` ## Explanation 1. Logging and Password File Setup: The script ensures that the /var/secure directory exists with the appropriate permissions. It creates the password file /var/secure/user_passwords.csv and sets permissions so that only the file owner can read it. 2. Logging Function: The log_message function logs messages to /var/log/user_management.log with a timestamp, providing an audit trail of actions taken by the script. 3. Password Generation Function: The generate_password function generates a random password of 12 characters, comprising uppercase and lowercase letters and digits. 4. User Setup Function: The setup_user function creates a user if they do not already exist, generates and sets a password for them, creates groups if necessary, adds the user to the specified groups, and sets up their home directory with appropriate permissions. 5. Main Script: The main part of the script reads an input file containing username;groups entries, processes each line, and calls the setup_user function for each user. ## Usage Prepare the Input File: Create a text file (e.g., input.txt) with the following format, where each line contains a username and a list of groups separated by a semicolon: ``` user1;group1,group2 user2;group3,group4 ``` Run the Script: Save the script to a file (e.g., create_user.sh), switch to the root user with `sudo -s`, make the script executable, and run it with the path to your input file as an argument: ``` sudo -s chmod +x create_user.sh ./create_user.sh input.txt ``` Note that running the script as the root user is necessary to ensure it has the required permissions to create users, modify groups, and write to the log and password files. ## Verify the Script ```bash cd /var/log/ ls # user_management.log cd /var/secure/ ls # user_passwords.csv ``` ## Conclusion This Bash script simplifies user management by automating the creation of users and groups, setting up home directories, generating random passwords, and logging all actions. By following the steps outlined above, you can efficiently manage user accounts on your system while maintaining security and a clear audit trail. ### FOR TALENTS [HNG Internship](https://hng.tech/internship) [Hire Talents](https://hng.tech/hire)
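As a final illustration (my addition, with made-up usernames and placeholder timestamps), given an input file like the one below, the script would append entries along these lines to /var/log/user_management.log, matching the `log_message` format above:

```
# input.txt (hypothetical)
alice;developers,docker
bob;developers

# matching /var/log/user_management.log entries (timestamps are placeholders)
2024-07-03 02:00:16 - User alice created.
2024-07-03 02:00:16 - Password for alice stored.
2024-07-03 02:00:16 - Group developers created.
2024-07-03 02:00:16 - Added alice to developers.
2024-07-03 02:00:17 - Group docker created.
2024-07-03 02:00:17 - Added alice to docker.
2024-07-03 02:00:17 - Home directory for alice set up with appropriate permissions.
```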
daniaernest
1,909,542
Top ShowdownJS Extensions and Their Usage
Top 20 ShowdownJS Extensions and Their Usage ShowdownJS is a powerful Markdown to HTML...
0
2024-07-03T01:41:23
https://dev.to/sh20raj/top-showdownjs-extensions-and-their-usage-18an
showdownjs, extensions, javascript, webdev
## Top 20 ShowdownJS Extensions and Their Usage ShowdownJS is a powerful Markdown to HTML converter. To extend its capabilities, developers have created numerous extensions. Here's a comprehensive guide to 20 of the top ShowdownJS extensions, categorized by their functionality. ### Syntax Highlighting 1. **Showdown Highlight Extension** - **Description**: Adds syntax highlighting to code blocks. - **Installation**: ```bash npm install showdown-highlight ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownHighlight = require('showdown-highlight'); const converter = new showdown.Converter({ extensions: [showdownHighlight] }); const markdown = '```javascript\nconsole.log("Hello, world!");\n```'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Media Embedding 2. **Showdown YouTube Embed Extension** - **Description**: Embeds YouTube videos with simple syntax. - **Installation**: ```bash npm install showdown-youtube ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownYoutube = require('showdown-youtube'); const converter = new showdown.Converter({ extensions: [showdownYoutube] }); const markdown = 'Check out this video: @[](https://www.youtube.com/watch?v=dQw4w9WgXcQ)'; const html = converter.makeHtml(markdown); console.log(html); ``` 3. **Showdown CodePen Embed Extension** - **Description**: Embeds CodePen snippets. - **Installation**: ```bash npm install showdown-codepen ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownCodepen = require('showdown-codepen'); const converter = new showdown.Converter({ extensions: [showdownCodepen] }); const markdown = '@[codepen](https://codepen.io/pen/wefewfw)'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Link Handling 4. **Showdown Target Blank Extension** - **Description**: Modifies links to open in a new tab. - **Installation**: ```bash npm install showdown-target-blank ``` - **Usage**: ```javascript const showdown = require('showdown'); const targetBlank = require('showdown-target-blank'); const converter = new showdown.Converter({ extensions: [targetBlank] }); const markdown = '[Open Google](https://www.google.com)'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Emoji Support 5. **Showdown Emoji Extension** - **Description**: Adds emoji support. - **Installation**: ```bash npm install showdown-emoji ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownEmoji = require('showdown-emoji'); const converter = new showdown.Converter({ extensions: [showdownEmoji] }); const markdown = 'Hello, world! :smile:'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Tooltip Support 6. **Showdown Tooltip Extension** - **Description**: Adds tooltips to text. - **Installation**: ```bash npm install showdown-tooltip ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownTooltip = require('showdown-tooltip'); const converter = new showdown.Converter({ extensions: [showdownTooltip] }); const markdown = 'Hover over this text for a tooltip. ![alt text](tooltip "This is a tooltip")'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Table of Contents 7. **Showdown TOC Extension** - **Description**: Generates a table of contents from headings. 
- **Installation**: ```bash npm install showdown-toc ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownToc = require('showdown-toc'); const converter = new showdown.Converter({ extensions: [showdownToc] }); const markdown = '# Heading 1\n\n## Heading 2\n\n### Heading 3\n'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Math Support 8. **Showdown MathJax Extension** - **Description**: Adds MathJax support for rendering mathematical expressions. - **Installation**: ```bash npm install showdown-mathjax ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownMathjax = require('showdown-mathjax'); const converter = new showdown.Converter({ extensions: [showdownMathjax] }); const markdown = '$$E=mc^2$$'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Footnotes 9. **Showdown Footnotes Extension** - **Description**: Adds support for footnotes. - **Installation**: ```bash npm install showdown-footnotes ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownFootnotes = require('showdown-footnotes'); const converter = new showdown.Converter({ extensions: [showdownFootnotes] }); const markdown = 'Here is a footnote reference[^1]\n\n[^1]: Here is the footnote.'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Task Lists 10. **Showdown Task Lists Extension** - **Description**: Adds support for task lists. - **Installation**: ```bash npm install showdown-task-lists ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownTaskLists = require('showdown-task-lists'); const converter = new showdown.Converter({ extensions: [showdownTaskLists] }); const markdown = '- [ ] Task 1\n- [x] Task 2'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Smart Punctuation 11. **Showdown Smartypants Extension** - **Description**: Converts ASCII punctuation characters into "smart" typographic punctuation HTML entities. - **Installation**: ```bash npm install showdown-smartypants ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownSmartypants = require('showdown-smartypants'); const converter = new showdown.Converter({ extensions: [showdownSmartypants] }); const markdown = '"Hello," he said.'; const html = converter.makeHtml(markdown); console.log(html); ``` ### External Links 13. **Showdown External Links Extension** - **Description**: Adds target="_blank" and rel="noopener noreferrer" to external links. - **Installation**: ```bash npm install showdown-external-links ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownExternalLinks = require('showdown-external-links'); const converter = new showdown.Converter({ extensions: [showdownExternalLinks] }); const markdown = '[External Link](https://example.com)'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Audio Embedding 14. **Showdown Audio Embed Extension** - **Description**: Embeds audio files. - **Installation**: ```bash npm install showdown-audio-embed ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownAudioEmbed = require('showdown-audio-embed'); const converter = new showdown.Converter({ extensions: [showdownAudioEmbed] }); const markdown = '![audio](https://example.com/audio.mp3)'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Image Handling 15. **Showdown Lazy Load Images Extension** - **Description**: Adds lazy loading to images.
- **Installation**: ```bash npm install showdown-lazy-load-images ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownLazyLoadImages = require('showdown-lazy-load-images'); const converter = new showdown.Converter({ extensions: [showdownLazyLoadImages] }); const markdown = '![alt text](https://example.com/image.jpg)'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Custom Containers 16. **Showdown Container Extension** - **Description**: Adds custom container blocks. - **Installation**: ```bash npm install showdown-container ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownContainer = require('showdown-container'); const converter = new showdown.Converter({ extensions: [showdownContainer] }); const markdown = '::: warning\n*Here be dragons*\n:::'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Sanitizing HTML 17. **Showdown xssFilter Extension** - **Description**: Adds an XSS filter to the output HTML. - **Installation**: ```bash npm install showdown-xss-filter ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownXssFilter = require('showdown-xss-filter'); const converter = new showdown.Converter({ extensions: [showdownXssFilter] }); const markdown = '<script>alert("XSS")</script>'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Alerts and Notifications 18. **Showdown Alert Extension** - **Description**: Adds alert boxes. - **Installation**: ```bash npm install showdown-alert ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownAlert = require('showdown-alert'); const converter = new showdown.Converter({ extensions: [showdownAlert] }); const markdown = '::: alert\n*Important message*\n:::'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Code Copy Button 19. **Showdown Copy Code Button Extension** - **Description**: Adds a copy button to code blocks. - **Installation**: ```bash npm install showdown-copy-code-button ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownCopyCodeButton = require('showdown-copy-code-button'); const converter = new showdown.Converter({ extensions: [showdownCopyCodeButton] }); const markdown = '```javascript\nconsole.log("Hello, world!");\n```'; const html = converter.makeHtml(markdown); console.log(html); ``` ### Customizable HTML 20. **Showdown Custom HTML Extension** - **Description**: Allows adding custom HTML tags. - **Installation**: ```bash npm install showdown-custom-html ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownCustomHtml = require('showdown-custom-html'); const converter = new showdown.Converter({ extensions: [showdownCustomHtml] }); const markdown = '<custom-tag>Custom content</custom-tag>'; const html = converter.makeHtml(markdown); console.log(html); ``` --- ### Diagrams 12. **Showdown Mermaid Extension** - **Description**: Adds support for Mermaid diagrams. 
- **Installation**: ```bash npm install showdown-mermaid ``` - **Usage**: ```javascript const showdown = require('showdown'); const showdownMermaid = require('showdown-mermaid'); const converter = new showdown.Converter({ extensions: [showdownMermaid] }); const markdown = '```mermaid\ngraph TD;\n A-->B;\n A-->C;\n B-->D;\n C-->D;\n```'; const html = converter.makeHtml(markdown); console.log(html); ``` --- These extensions showcase the versatility and extensibility of ShowdownJS, enabling you to tailor it to your specific needs. Whether you need syntax highlighting, media embedding, custom containers, or additional HTML sanitization, these extensions provide robust solutions.
sh20raj
1,909,540
Greetings!
Greetings all. I'm really excited to be here. I'm a new developer, working hard to build up my...
0
2024-07-03T01:37:56
https://dev.to/bearmsu/greetings-512l
programming, python, learning
Greetings all. I'm really excited to be here. I'm a new developer, working hard to build up my skills. I focus mostly on Python and SQL, but am also learning full-stack web development (HTML, CSS, and JS). I'm really excited to join this community. I'm always looking to learn and never pass up the opportunity to get better. I'm always willing to collaborate and learn more, as I'm still a newbie at this. Even though I've been working hard for the last year, I feel like I move one step forward and take two steps back. I'm hoping that I can work with others, collaborate, and gain the knowledge and experience I need to be a good developer, and then pass that knowledge on.
bearmsu
1,909,539
Razor Delivery Service
Razor Delivery Service: The Prescription Delivery Solution! Simplify your pharmacy’s prescription...
0
2024-07-03T01:37:34
https://dev.to/razordelivery/razor-delivery-service-4l77
razordeliveryservice, razordelivery, razordeliverynewyorkcity
Razor Delivery Service: The Prescription Delivery Solution!

Simplify your pharmacy's prescription delivery process to improve operational efficiency, minimize patient burden, and improve drug adherence.

Features:

- Delivery Service: We are on time. Always on time. We know how important your time is. Faster than you can imagine: experience fast service.
- Effortless onboarding: Create recurring tasks with various rules to match your business needs.
- Real-time data: Your drivers follow their assigned route via Google Maps and complete all deliveries on time.
- Proof of delivery: Proof of delivery is the receipt for the transportation of goods delivered, with in-app collection of photos, signatures, barcodes, and notes.
- Analyze and improve: Enhance your delivery services with end-to-end route planning, automated dispatch, and real-time tracking.

Contact:

- Business Hours: Monday-Friday, 10AM-6PM
- Email Address: support@razordelivery.com
- Phone Number: (917) 475-9222
razordelivery
1,909,538
HNG STAGE ZERO: ANALYZING RETAIL SALES DATA AT FIRST GLANCE
Introduction The Kaggle dataset “Sample Sales Data” by Kyanyoga provides a sample dataset...
0
2024-07-03T01:31:53
https://dev.to/devbassey/hng-stage-zero-analyzing-retail-sales-data-at-first-glance-2gp3
data, datascience, database, dataengineering
## Introduction

The Kaggle dataset "[Sample Sales Data](https://www.kaggle.com/datasets/kyanyoga/sample-sales-data?resource=download)" by Kyanyoga provides a sample dataset for sales data analysis. It includes sales data with attributes such as order details, product information, customer details, and geographical data, allowing for various analyses like sales trends, product performance, and revenue analysis. This dataset is useful for practicing data analysis, visualization, and predictive modeling.

- Purpose: This report analyzes the provided sales data to extract meaningful insights into sales trends, product performance, and revenue generation.
- Scope: This report covers data exploration, cleaning, analysis, and visualization of the sales data.

## Observations

**1. Dataset Overview:**

- Source: Kaggle, Sample Sales Data by Kyanyoga
- Description: The dataset contains 2823 entries and 25 columns of sales records. Columns include order numbers, quantities, prices, sales amounts, dates, statuses, product lines, customer details, and geographical data.

**2. Data Exploration**

**Data Structure:**

- Number of records: 2823
- Number of features: 25 columns

Features include Product Category, Order Quantity, Sales Value, Date, etc.

Initial observations: summary statistics of key features such as mean, median, and standard deviation.

**3. Data Cleaning**

Missing values (columns with incomplete data):

- ADDRESSLINE2 (302 non-null entries)
- STATE (1337 non-null entries)
- POSTALCODE (2747 non-null entries)
- TERRITORY (1749 non-null entries)

**Data Types:**

- Numerical: ORDERNUMBER, QUANTITYORDERED, PRICEEACH, ORDERLINENUMBER, SALES, QTR_ID, MONTH_ID, YEAR_ID, MSRP.
- Categorical: STATUS, PRODUCTLINE, PRODUCTCODE, CUSTOMERNAME, PHONE, ADDRESSLINE1, ADDRESSLINE2, CITY, STATE, POSTALCODE, COUNTRY, TERRITORY, CONTACTLASTNAME, CONTACTFIRSTNAME, DEALSIZE.
- Date/Time: ORDERDATE.

## Visualization

The following are the views I used for the sales data visualization:

- Sales Over Time
- Top Products by Sales
- Sales by Region

**Sales Over Time**

I used a line chart to visualize sales over time between January 2003 and April 2005, based on the Sample Sales Data. The ORDERDATE column was converted to the datetime data type. See the image below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yjaremhbiyirwr1mp902.png)

The line chart above shows total sales over time (2003-2005). The x-axis represents the date (by month), and the y-axis represents the total sales for each month. This visualization helps identify trends and patterns in sales over a given period.

**Top Products by Sales**

I used a bar chart to visualize top products by sales, after aggregating on the PRODUCTLINE column. See the image below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xo8q5d6nf198jhyg3tpl.png)

The sales percentages by product line are:

- Motorcycles: 11.7%
- Classic Cars: 34.3%
- Trucks and Buses: 10.7%
- Vintage Cars: 21.5%
- Planes: 10.8%
- Ships: 8.3%
- Trains: 2.7%

From these statistics, the top 3 product lines by sales are Classic Cars (34.3%), Vintage Cars (21.5%), and Motorcycles (11.7%).

**Sales by Region**

To visualize top sales by region, I aggregated the COUNTRY column and used a pie chart. See the image below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20s15slp5icz8cwov0v5.png)

The above visualization shows the top 3 regions by sales, with the US topping the chart at 35.6%, followed by Spain at 12.1% and France at 11.1%.

## Conclusion

This analysis is a first-glance overview of the sales transactions.

Learn more about HNG via the links below:

https://hng.tech/internship
https://hng.tech/premium
devbassey
1,908,413
How to use an auto-tiling technique in your next game project
As a game developer, if the thought of hand crafting a level does not appeal to you, then you may...
0
2024-07-03T01:15:10
https://excaliburjs.com/blog/Autotiling%20Technique
gamedev, typescript, tutorial, algorithms
As a game developer, if the thought of hand-crafting a level does not appeal to you, then you may consider looking into procedural generation for your next project. Even using procedural generation, however, you still need to be able to turn your generated map arrays into a tilemap with clean, contiguous walls and sprites that match up cleanly, as if it were drawn by hand. This is where a technique called auto-tiling can come into play to help determine which tiles should be drawn in which locations on your tilemap. In this article, I will explain the concept of auto-tiling, Wang tiles, [binary](https://en.wikipedia.org/wiki/Binary) and [bitmasks](https://en.wikipedia.org/wiki/Mask_(computing)), and then walk through the process and algorithms associated with using this tool in a project.

## What is auto-tiling

Auto-tiling converts a [matrix](https://processing.org/tutorials/2darray) or [array](https://en.wikipedia.org/wiki/Array) of information about a map and assigns the corresponding tile texture to each tile in a manner that makes sense visually for the tilemap level. This uses a tile's position relative to its neighbor tiles to determine which tile sprite should be used. Today we will focus on bitmask encoding of neighbor data, although there are other techniques that can be used to accomplish this.

One can get exposed to auto-tiling in different implementations. If you're using a game engine like [Unity](https://unity.com/) or [Godot](https://godotengine.org/), there are features built into those packages to enable auto-tiling as you draw and create your levels. There are also software tools like [Tiled](https://www.mapeditor.org/), [LDTK](https://ldtk.io/), and [Sprite Fusion](https://www.spritefusion.com/), which are a little more tilemap-specific and give you native tools for auto-tiling.

Auto-tiling provides the most benefit when we pivot from tilemap matrices or flat indexes representing the state of a tilemap to a rendered map on the screen. Let us say you have a tilemap in the form of a 2D matrix with 1's and 0's in it representing the 'walkable' state of a tile. Let us assign each tile as a floor (0) piece or a wall (1) piece. Now, one can simply use two different tiles, for example: a grass tile

![grass tile](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/io67kgwrm3jl5qtkd5vq.png)

and a dirt path tile

![dirt tile](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fa96gwmcc2630conh5dd.png)

We could take a tilemap matrix like this:

![tile matrix](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/quj5dcmi61chu16i71sa.png)

and use these two tiles, assigning the grass to the 1's and the path tile to the 0's. It would look like this:

![rudimentary tilemap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ux05p167z8bb37y838d0.png)

This is technically a tilemap which has been auto-tiled, but we can do a little better.

## What are Wang tiles?

Wang tiles do not belong to game development or tile-sets specifically, but come from mathematics. So, why are we talking about them? The purpose of Wang tiles within the scope of game development is to have a series of tile edges that create matching patterns with other tiles. We control which tiles are used by assigning a unique bitmask to each tile that allows us to reference it later. Wang tiles themselves are a class of system which can be modeled visually by square tiles with a color on each side. The tiles can be copied and arranged side by side with matching edges to form a pattern.

Wang tile-sets, or Wang 'Blob' tiles, are named after Hao Wang, a mathematician who in the 1960s theorized that a finite set of tiles, whose sides matched up with other tiles, would ultimately form a repeating or periodic pattern. This was later proven false by one of his students. This is a massive oversimplification of Wang's work; more information on the backstory of Wang tiles can be read here: [Wang Tiles](https://en.wikipedia.org/wiki/Wang_tile).

This concept of matching tile edges to a pattern can be used for a game's tilemap. One way we can implement Wang tiles in game development is to create levels from the tiles. We start with a tile-set that represents all the possible edge outcomes for any tile. These tile assets can be found here: [Wang Tile Set](https://opengameart.org/content/wang-%E2%80%98blob%E2%80%99-tileset)

![Wang Tiles](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9dd9qr1rj6rs8lpgxuqj.png)

The numbers on each tile represent the bitmask value for that particular permutation of tile design. In the image above, there are a couple of duplicate tile configurations, and they are shown in white font. Below, you can see how these tiles can be swapped for a separate texture.

![Wang Textured](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qip1mvec165r0s047aen.png)

The magic of Wang tiles is that they can be extended out to create unique patterns that work visually. For example:

![Wang Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkt6rmcqfa0ljmcsk8yp.png)

As you can see from the tiles, every permutation of tile design is considered based on what potential neighbors each tile would have to marry up to.

## What is a bitmask?

A bitmask is a binary representation of some pattern. In the scope of this conversation, we will use a bitmask to represent the 8 neighbor tiles of a given tile on a tilemap.

### Quick crash course on Binary

Our normal counting format is base-10. This means that each digit in our number system can be 0-9 (10 digits), and each place value increases by a power of 10. So in the number '42', the 2 represents 2 \* 10<sup>0</sup>, which is added to the 4 in the 'tens' place, which is 4 \* 10<sup>1</sup>, equaling 42.

```
(2 * 1) + (4 * 10) = 42
```

Binary looks different, as it is base-2, which means that each digit position has digits 0 and 1 (2 digits). This is the counting system and 'language' of computers and processors. Quickly, let's re-assess the previous example of '42'. 42 in binary is 101010. Let's break this down in similar fashion, starting from the rightmost placeholder and working our way left. The 0 in the rightmost digit represents 0 \* 2<sup>0</sup>. The next digit represents 1 \* 2<sup>1</sup>... and so on for each digit, with the exponent increasing at each placeholder.

```
   0         1         0         1         0          1
_________________________________________________________________
(0 * 1) + (1 * 2) + (0 * 4) + (1 * 8) + (0 * 16) + (1 * 32) = 42

2 + 8 + 32 = 42
```

### Bits, Bytes, and Bitmasks

That is how information in computers is encoded. We can use this encoding scheme to easily represent binary information, like 'on' or 'off', or in this discussion, walkable tile or not walkable. This is why in the tile-set matrix example above, we can flag non-walkable tiles as '1' and walkable tiles as '0'. This is now binary encoded.
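As a quick sanity check, the conversions above can be reproduced in any JavaScript or TypeScript console (a throwaway snippet, not part of the project code):

```ts
// Base-2 to base-10 and back, matching the worked example above
console.log(parseInt("101010", 2)); // 42
console.log((42).toString(2));      // "101010"
```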
A bit is one of these placeholders, or one digit, and 8 of these bits together make a byte. Computers and processors, at a minimum, read at least a byte at a time. We can use this binary encoding for auto-tiling by representing the state of each of a tile's neighbors in 8 bits, one for each neighbor. This means that the condition and status of each neighbor for a tile can be encoded into one byte of data (8 bits) and can be represented with a decimal value; see my earlier explanation of how the number 42 is represented in binary. So the whole point of this section is to get to this example: we are going to encode the neighbor data for an example tile.

### Quick Demonstration

![bitmask example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qim78uq0kkvp8ey44hwk.png)

The tile we are assigning the bitmask to is the green, center tile. This tile has 8 neighbors. If I start reading the 1's and 0's from the top left, reading right, then down, I get the value: 101 (top row) - 01 (middle row) - 101 (bottom row). Remember to skip the green tile.

![reading bitmask example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ppr609isu798loq4426.png)

All together, this is 10101101, which can be stored as a binary value and converted to a decimal value: 173. Remember to start at the rightmost bit when converting.

```
   1         0         1         1         0          1          0          1
__________________________________________________________________________________
(1 * 1) + (0 * 2) + (1 * 4) + (1 * 8) + (0 * 16) + (1 * 32) + (0 * 64) + (1 * 128)

1 + 4 + 8 + 32 + 128 = 173
```

Now we can use that decimal value of 173 to represent the neighbor pattern for that tile. Every tile in a tilemap can be encoded with its neighbors' bitmask. As you saw earlier, the Wang tiles had bitmask values assigned to them. This is how we know which tile to substitute for each bitmask.

## The Process

We have already covered the hard part. In this section we pull it all together in a walkthrough of the overall high-level process. Here are the steps we are covering:

1. Find or create a tile-set spritesheet that you would like to use
2. Create your tilemap data, however you like
3. Loop through each tile index, evaluate the neighbor tiles, and assign a bitmask
4. Map the bitmask values to the 'appropriate' tile in your tile-set (this is the long/boring part IMO)
5. Iterate over each tile and assign the correct image that matches the bitmask value
6. Draw your tilemap in your game

### Creating a tile-set

Here is an example of a tile-set that I drew for the demo project.

![example tileset](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8kcb6kpvjbojkjmax837.png)

These 47 tiles represent all the different 'wall' formations that would be required. I kept my floor tiles separate in a different file so that they are easier to swap out. The floor is drawn as a separate tile underneath the wall. Each tile represented in the grid is designed to match up with a specific group of neighbor patterns. Let's take the top-left tile:

![top left tile](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bo7qm8rbukf7d0kvtbi2.png)

This tile is intended to be mapped to a tile where there are walled neighbors on the right, below, and bottom right of the tile in question. There may be a few neighbor combinations that ultimately get mapped to this tile; in my project I found 7 combinations that this tile configuration would be mapped to. If you look through each tile you can see how it 'matches' up with another mating tile or tiles in the map.
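Several different raw bitmask values collapsing onto the same piece of art like this is expected. If hand-mapping every value sounds tedious, one well-known shortcut (sketched here as an alternative, not the approach I used in the demo project) is to normalize each mask first: a diagonal neighbor only changes the artwork when both of its adjacent edge neighbors are also walls, which collapses the 256 raw values down to the 47 canonical 'blob' patterns. This sketch assumes the bit order 0=SE, 1=S, 2=SW, 3=E, 4=W, 5=NE, 6=N, 7=NW, matching the `neighborOffsets` table in the next section's code:

```ts
// Clear any corner bit whose two adjacent edge bits are not both set.
// This collapses all 256 raw masks onto the 47 canonical blob patterns.
const SE = 1 << 0, S = 1 << 1, SW = 1 << 2, E = 1 << 3,
      W = 1 << 4, NE = 1 << 5, N = 1 << 6, NW = 1 << 7;

function normalizeBitmask(mask: number): number {
  let m = mask;
  if ((m & (S | E)) !== (S | E)) m &= ~SE; // SE only matters with S and E
  if ((m & (S | W)) !== (S | W)) m &= ~SW; // SW only matters with S and W
  if ((m & (N | E)) !== (N | E)) m &= ~NE; // NE only matters with N and E
  if ((m & (N | W)) !== (N | W)) m &= ~NW; // NW only matters with N and W
  return m;
}

// The demonstration value from earlier, 0b10101101 (173), has no N, S, or W
// edge, so all four corner bits drop out and only the E edge survives:
console.log(normalizeBitmask(0b10101101)); // 8 (just the E bit)
```

With a normalization like this, the lookup table only needs the 47 normalized keys instead of an entry for every raw combination.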
For my implementation, I spent time testing out each configuration visually to see which tile the different bitmasks needed to be mapped to.

### Create your tilemap data

Now we will use either a 2D matrix or a flat array in your codebase, with each index representing a tile. I use a flat array with tilemap width and height parameters; it is simply preference. You can manually set these values in your array, or you can use a procedural generation algorithm to determine where your wall and floor tiles go. I can recommend the [Cellular Automata](https://dev.to/excaliburjs/cellular-automata-171f) article that I wrote earlier if you are interested in generating the tilemap procedurally. When this is completed, you'll have a data set that will look something like this.

![Tilemap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6htvpfed06g28q7glor.png)

### Loop through tilemap and assign bitmasks

For each index of your array, you will need to capture the states of all the neighbor tiles and record that value on each tile. Refer to the previous section regarding how to calculate the bitmasks.

```ts
// This loops through each tile in the tilemap
private createTileMapBitmasks(map: TileMap): number[] {
  // create the array of bitmasks, the indexes of this array will match up
  // to the index of the tilemap
  let bitmask: number[] = new Array(map.columns * map.rows).fill(0);
  let tileIndex = 0;

  // for each tile in the map, add the bitmask to the array
  for (let tile of map.tiles) {
    bitmask[tileIndex] = this._getBitmask(map, tileIndex, 1);
    tileIndex++;
  }
  return bitmask;
}

// setting up neighbor offset indexes
const neighborOffsets = [
  [1, 1],
  [0, 1],
  [-1, 1],
  [1, 0],
  [-1, 0],
  [1, -1],
  [0, -1],
  [-1, -1],
];

// iterate through each neighbor tile and get the bitmask based on if the tile is solid
private _getBitmask(map: TileMap, index: number, outofbound: number): number {
  let bitmask = 0;

  // find the coordinates of the current tile
  const width = map.columns;
  const height = map.rows;
  let y = Math.floor(index / width);
  let x = index % width;

  // loop through each neighbor offset, and 'collect' their state
  for (let i = 0; i < neighborOffsets.length; i++) {
    const [dx, dy] = neighborOffsets[i];
    const nx = x + dx;
    const ny = y + dy;

    // convert back to index
    const altIndex = nx + ny * width;

    // check if the neighbor tile is out of bounds, else if the tile is a wall ('solid') shift it into the bitmask
    if (ny < 0 || ny >= height || nx < 0 || nx >= width) bitmask |= outofbound << i;
    else if (map.tiles[altIndex].data.get("solid") === true) bitmask |= 1 << i;
  }
  return bitmask;
}
```

### Map bitmask values to each tile sprite in spritesheet

Here is the monotonous part. For a byte, or an 8-bit word, the number of permutations of tile patterns is 256. That's a lot of mappings. I did mine the hard way, manually, one by one, but there may be easier ways to do this. I use TypeScript, so I will share a bit of what my mappings look like. Each number key in the object is the bitmask value, and it's mapped to a coordinate array [x, y] for the spritesheet that I shared earlier in the article. I could have put them in order, but that does not really serve any benefit.

```ts
export const tilebitmask: Record<number, Array<number>> = {
  0: [3, 3],
  1: [3, 3],
  4: [3, 3],
  128: [3, 3],
  32: [3, 3],
  11: [0, 0],
  175: [0, 0],
  15: [0, 0],
  47: [0, 0],
  207: [0, 5],
  203: [0, 5],
  124: [3, 5],
  43: [0, 0],
  ...
```

### Iterate over the tiles and assign tile sprite

The last two steps we'll do together. Now we simply need to iterate over our tilemap and assign the appropriate sprite tiles. I'm using the Excalibur.js game engine, and the code is in TypeScript, but you can use whichever tool you prefer.

```ts
draw(): TileMap {
  // call the method that loops through and configures all the bitmasks
  let bitmask = this.createTileMapBitmasks(this.map);
  let tileindex = 0;

  for (const tile of this.map.tiles) {
    tile.clearGraphics();

    // if the tile is solid, draw the base tile first, THEN the foreground tile
    if (tile.data.get("solid") === true) {
      // add floor tile
      tile.addGraphic(this.baseTile);

      // using the tile's index, grab the bitmask value
      let thisTileBitmask = bitmask[tileindex];

      // this is the magic... grab the coordinates of the tile sprite
      // from tilebitmask, and provide that to Excalibur
      let sprite: Sprite;
      sprite = this.spriteSheet.getSprite(tilebitmask[thisTileBitmask][0], tilebitmask[thisTileBitmask][1]);

      // add the wall sprite to the tile
      tile.addGraphic(sprite);
    } else {
      // if the tile is not solid, just draw the base tile
      tile.addGraphic(this.baseTile);
    }
    tileindex++;
  }
  return this.map;
}
```

## Demo Application

![Demo App](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkpz7sig87x0xrx915hz.png)

[Link to Demo](https://mookie4242.itch.io/autotiling-demonstration)
[Link to Repo](https://github.com/jyoung4242/CA-itchdemo)

In this demo application, I'm using the Excalibur.js engine to show how auto-tiling can work and its benefits in game development. The user can click on the tilemap to draw walkable paths onto the canvas. As the walkable paths are drawn, the auto-tiling algorithm automatically places the correct tile in position based on the neighbor tiles' walkable status.

There are some controls at the top of this app: a button to reset the tilemap settings back to not walkable, so one can start over, and two drop-downs that let the user swap out tile-sets for different styles. This shows the benefit of having standardized Wang tiles for your tile-sets. For example, in this demo we have three Wang tile-sets; when you swap them out, the app can automatically draw them correctly into your tilemap.

Grass

![Grass Tileset](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w5k0rudscp791uf540mw.png)

Snow

![Snow Tileset](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8qgk5lglu6k1x37vvxkl.png)

and Rock

![Rock Tileset](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7045soow38be4tq4wvly.png)

## Why Excalibur

![ExcaliburJS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qp03tbavz7ahbf9f4z7u.png)

Small plug... [ExcaliburJS](https://excaliburjs.com/) is a friendly, TypeScript 2D game engine that can produce games for the web. It is free and open source (FOSS), well documented, and has a growing, healthy community of gamedevs working with it and supporting each other. There is a great Discord channel for it [JOIN HERE](https://discord.gg/ScX52wD4eM) for questions and inquiries. Check it out!!!

## Conclusions

That was quite a bit, no? We covered the concept of auto-tiling as a tool you can use in game development. We discussed the benefits of Wang tiles for your projects and how they allow for the automatic selection of the correct tile sprites based on bitmask assignments. We dug into bitmasks and base-2 binary encoding to show how we encode the neighbor tile information into a decimal value so we can map the tile sprites appropriately. We finished that portion by doing an example tile encoding of neighbors to demonstrate the process. Then we went through the process of auto-tiling, looking at tile-sets, looking at code snippets, and finishing at the demo application on itch.

I hope you enjoyed this take on auto-tiling. As mentioned above, this is NOT the only way to do this; there are other ways of accomplishing the same effect. You can also tweak this to your own liking: for instance, you can introduce varying tiles so you can use different floor tiles, or add decor onto walls for additional variety and a feeling of greater immersion in the worlds you're building. Have fun!
jyoung4242
1,909,520
20 Ways to Improve Node.js Performance at Scale 🚀
Node.js is a powerful platform for building scalable and high-performance applications. However, as...
0
2024-07-03T01:13:48
https://dev.to/dipakahirav/20-ways-to-improve-nodejs-performance-at-scale-25nf
node, javascript, learning
Node.js is a powerful platform for building scalable and high-performance applications. However, as your application grows, maintaining optimal performance can become challenging. Here are twenty effective strategies to enhance Node.js performance at scale, complete with examples and tips. 💡

please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.

---

### 1. Use Asynchronous Programming ⏳

Node.js excels at handling asynchronous operations. Ensure your code leverages `async/await`, promises, and callbacks to avoid blocking the event loop.

**Example:**

```javascript
const fetchData = async () => {
  try {
    const response = await fetch('https://api.example.com/data');
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Error fetching data:', error);
  }
};

fetchData();
```

---

### 2. Optimize Database Queries 📊

Efficient database queries are crucial for performance. Use indexing, caching, and query optimization techniques. Consider using an ORM for complex operations.

**Example with Mongoose (MongoDB ORM):**

```javascript
// Create an index to speed up searches
const userSchema = new mongoose.Schema({
  username: { type: String, unique: true, index: true },
  email: { type: String, unique: true },
  // other fields...
});

const User = mongoose.model('User', userSchema);

// Optimized query
const getUsers = async () => {
  return await User.find({ active: true }).select('username email');
};
```

---

### 3. Implement Load Balancing ⚖️

Distribute incoming traffic across multiple servers to prevent bottlenecks. Use tools like NGINX, HAProxy, or cloud-based load balancers.

**NGINX Configuration Example:**

```nginx
http {
  upstream myapp {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://myapp;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
  }
}
```

---

### 4. Use Clustering 🧑‍🤝‍🧑

Node.js runs on a single thread but can utilize multiple CPU cores with clustering. Use the `cluster` module to enhance concurrency.

**Cluster Example:**

```javascript
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died`);
  });
} else {
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('Hello World\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}
```

---

### 5. Leverage Caching 🗂️

Implement caching to reduce redundant data fetching. Use in-memory caches like Redis or Memcached.

**Example with Redis:**

```javascript
const redis = require('redis');
const client = redis.createClient();

const cacheMiddleware = (req, res, next) => {
  const { key } = req.params;
  client.get(key, (err, data) => {
    if (err) throw err;
    if (data) {
      res.send(JSON.parse(data));
    } else {
      next();
    }
  });
};

// Route with caching
app.get('/data/:key', cacheMiddleware, async (req, res) => {
  const data = await fetchDataFromDatabase(req.params.key);
  client.setex(req.params.key, 3600, JSON.stringify(data));
  res.send(data);
});
```

---

### 6. Optimize Middleware and Routing 🛤️

Minimize the number of middleware layers and ensure they are efficient. Use a lightweight router and optimize the order of middleware.

**Optimized Middleware Example:**

```javascript
const express = require('express');
const app = express();

// Efficiently ordered middleware
app.use(express.json());
app.use(authMiddleware);
app.use(loggingMiddleware);

app.get('/users', userHandler);
app.post('/users', createUserHandler);

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```

---

### 7. Monitor and Profile Your Application 🔍

Regularly monitor your application's performance and identify bottlenecks using tools like New Relic, AppDynamics, or built-in Node.js profiling tools.

**Basic Profiling Example:**

```javascript
const { performance, PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((items) => {
  console.log(items.getEntries());
  performance.clearMarks();
});
obs.observe({ entryTypes: ['measure'] });

performance.mark('A');
doSomeLongRunningTask();
performance.mark('B');
performance.measure('A to B', 'A', 'B');
```

---

### 8. Use HTTP/2 🌐

HTTP/2 offers several improvements over HTTP/1.1, such as multiplexing, header compression, and server push, which can significantly enhance the performance of web applications.

**Example:**

```javascript
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('path/to/privkey.pem'),
  cert: fs.readFileSync('path/to/fullchain.pem')
});

server.on('stream', (stream, headers) => {
  stream.respond({
    'content-type': 'text/html',
    ':status': 200
  });
  stream.end('<h1>Hello World</h1>');
});

server.listen(8443);
```

---

### 9. Optimize Static Assets 📦

Serving static assets efficiently can significantly reduce load times. Use a Content Delivery Network (CDN) to distribute static files globally, and implement caching and compression techniques.

**Example:**

```javascript
const express = require('express');
const compression = require('compression');
const app = express();

app.use(compression()); // Enable gzip compression

app.use(express.static('public', {
  maxAge: '1d', // Cache static assets for 1 day
}));

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```

---

### 10. Manage Memory Efficiently 🧠

Efficient memory management is crucial for Node.js applications, especially under high load. Monitor memory usage and avoid memory leaks by properly handling event listeners and cleaning up resources.

**Memory Leak Detection Example:**

```javascript
const memwatch = require('memwatch-next');

memwatch.on('leak', (info) => {
  console.error('Memory leak detected:', info);
});

const leakyFunction = () => {
  const largeArray = new Array(1e6).fill('leak');
  // Simulate a memory leak
};

setInterval(leakyFunction, 1000);
```

---

### 11. Implement Rate Limiting 🚦

Protect your application from abusive traffic by implementing rate limiting. This helps ensure fair usage and prevents overloading your server.

**Rate Limiting Example with Express:**

```javascript
const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100 // limit each IP to 100 requests per windowMs
});

app.use(limiter);

app.get('/', (req, res) => {
  res.send('Hello World!');
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```

---

### 12. Use Proper Error Handling 🛡️

Effective error handling prevents crashes and helps maintain the stability of your application. Use centralized error handling middleware and avoid unhandled promise rejections.

**Centralized Error Handling Example:**

```javascript
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).send('Something broke!');
});

process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled Rejection:', reason);
});
```

---

### 13. Utilize HTTP Caching Headers 🗃️

Leverage HTTP caching headers to instruct browsers and intermediate caches on how to handle static and dynamic content. This reduces the load on your server and speeds up response times.

**Example with Express:**

```javascript
app.get('/data', (req, res) => {
  res.set('Cache-Control', 'public, max-age=3600');
  res.json({ message: 'Hello World!' });
});
```

---

### 14. Profile and Optimize Code 🛠️

Regularly profile your Node.js application to identify performance bottlenecks. Use tools like Chrome DevTools, the Node.js built-in profiler, and Flamegraphs to pinpoint and optimize slow code paths.

**Basic Profiling Example:**

```bash
node --prof app.js
node --prof-process isolate-0xnnnnnnnnnnnn-v8.log > processed.txt
```

---

### 15. Use Environment Variables 🌍

Manage configuration settings using environment variables to avoid hardcoding sensitive data and configuration details in your code. This practice enhances security and flexibility.

**Example with `dotenv` Package:**

```javascript
require('dotenv').config();

const dbConnectionString = process.env.DB_CONNECTION_STRING;
mongoose.connect(dbConnectionString, { useNewUrlParser: true, useUnifiedTopology: true });
```

---

### 16. Enable HTTP Keep-Alive 📡

Enabling HTTP Keep-Alive allows multiple requests to be sent over a single TCP connection, reducing latency and improving performance.

**Example with Express:**

```javascript
const http = require('http');
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello World!');
});

const server = http.createServer(app);
server.keepAliveTimeout = 60000; // Set keep-alive timeout to 60 seconds

server.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```

---

### 17. Use Streams for Large Data Transfers 🚀

Node.js streams allow you to process large amounts of data efficiently by breaking it into chunks. Use streams for reading and writing large files or data transfers to avoid blocking the event loop.

**Example of Reading a File with Streams:**

```javascript
const fs = require('fs');

const readStream = fs.createReadStream('largeFile.txt');

readStream.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
});

readStream.on('end', () => {
  console.log('Finished reading file.');
});
```

---

### 18. Use WebSockets for Real-Time Applications 🌐

For real-time applications, use WebSockets to enable full-duplex communication between the client and server. This method is more efficient than HTTP polling.

**Example with `ws` Package:**

```javascript
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (message) => {
    console.log('Received:', message);
  });

  ws.send('Hello! You are connected.');
});
```

---

### 19. Avoid Blocking the Event Loop 🌀

Ensure that long-running operations do not block the event loop. Use asynchronous methods or worker threads for CPU-intensive tasks.
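The example below spawns a `./worker.js` file. As a minimal sketch, that companion file could look something like this (the Fibonacci call is just a stand-in for any CPU-heavy work):

```javascript
// worker.js - hypothetical companion file for the worker-threads example below
const { parentPort } = require('worker_threads');

// Simulate a CPU-intensive task that would otherwise block the event loop
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Post the result back to the main thread when done
parentPort.postMessage(fib(40));
```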
**Example with Worker Threads:**

```javascript
const { Worker } = require('worker_threads');

const runService = () => {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js');

    worker.on('message', resolve);
    worker.on('error', reject);
    worker.on('exit', (code) => {
      if (code !== 0) reject(new Error(`Worker stopped with exit code ${code}`));
    });
  });
};

runService().then(result => console.log(result)).catch(err => console.error(err));
```

---

### 20. Use Node.js Performance Hooks 📊

Node.js provides built-in performance hooks that allow you to measure and monitor the performance of your application.

**Example:**

```javascript
const { performance, PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((items) => {
  items.getEntries().forEach(entry => {
    console.log(`${entry.name}: ${entry.duration}`);
  });
});
obs.observe({ entryTypes: ['measure'] });

performance.mark('A');

// Simulate a long operation
setTimeout(() => {
  performance.mark('B');
  performance.measure('A to B', 'A', 'B');
}, 1000);
```

---

By implementing these twenty strategies, you can significantly improve the performance and scalability of your Node.js applications. This comprehensive approach ensures your application can handle large volumes of traffic and data efficiently. 🚀

---

If you found this article helpful, don't forget to ❤️ and follow for more content on Node.js and full-stack development!

Happy coding! 🧑‍💻✨

please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.

### Follow and Subscribe:

- **Instagram**: [devdivewithdipak](https://www.instagram.com/devdivewithdipak)
- **Website**: [Dipak Ahirav](https://www.dipakahirav.com)
- **Email**: dipaksahirav@gmail.com
- **YouTube**: [devDive with Dipak](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1)
- **LinkedIn**: [Dipak Ahirav](https://www.linkedin.com/in/dipak-ahirav-606bba128)

Happy coding! 🚀
dipakahirav
1,909,519
6 Best Database Tools for SQL Server-2024
In this article, we discuss the 6 best tools: 3 data modeling tools for SQL Server, and 3 for...
0
2024-07-03T01:13:25
https://dev.to/concerate/8-best-database-tools-for-sql-server-37mh
In this article, we discuss the 6 best tools: 3 data modeling tools for SQL Server, and 3 for creating, testing, and managing SQL Server databases.

**The Best Data Modelers for SQL Server**

Data modeling is the practice of analyzing, arranging, and presenting data, its relationships, and other information. Standard notations are used to organize these aspects visually. A proper data model offers a blueprint when creating a new software system or modifying a legacy application, improving the efficiency and effectiveness of development. Data models are used in all phases of database development, so it is critical to pick an appropriate data modeler. Our picks for the top SQL Server data modeling tools are below.

**1. Vertabelo**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xuemftgx8zpfpc4p1t46.png)

**Vertabelo** is a powerful and intuitive online database modeling tool designed to help database developers, architects, and analysts design databases visually and collaboratively.

Key Features

**Visual Database Design:** Vertabelo allows users to create and manage database models visually. It supports various database engines, including MySQL, PostgreSQL, SQL Server, Oracle, SQLite, and more. The tool provides a clean and user-friendly interface to design database schemas, including tables, columns, keys, indexes, and relationships.

**Collaboration and Sharing:** Users can share their database models with team members or clients, allowing for real-time collaboration. Vertabelo supports different access levels, ensuring that team members can only make changes based on their permissions.

**Model Validation:** Vertabelo includes built-in validation rules that check the consistency and integrity of the database model, helping to identify and correct errors early in the design process.

**SQL Script Generation:** Once the database design is complete, Vertabelo can generate SQL scripts to create the database schema in the target database engine. This feature supports forward engineering, allowing for quick and accurate deployment of database models.

**Reverse Engineering:** Vertabelo supports reverse engineering, which means users can import existing database schemas into the tool and visualize them as models. This is particularly useful for understanding and documenting legacy databases.

**Version Control:** The tool includes version control features that allow users to track changes made to the database model over time. Users can compare different versions of the model and revert to previous versions if needed.

**Integration with Other Tools:** Vertabelo can integrate with various other tools and platforms, enhancing its functionality and making it a versatile choice for database modeling.

**Cloud-Based Access:** As a cloud-based application, Vertabelo allows users to access their database models from anywhere with an internet connection. This eliminates the need for local installations and ensures that the latest version of the model is always available.

**2. Visual Paradigm**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mj4lykh7vv79tk6vqei4.png)

Visual Paradigm is a versatile software suite that provides a comprehensive set of tools for system modeling, project management, software development, and database design. It's highly regarded for its robust functionality and user-friendly interface, making it a popular choice among professionals in various fields.

Key Features of Visual Paradigm

**UML and SysML Modeling:** Visual Paradigm supports a wide range of Unified Modeling Language (UML) diagrams, including class, sequence, use case, activity, and state diagrams. It also supports Systems Modeling Language (SysML), which is essential for systems engineering.

**Database Design and Engineering:** Visual Paradigm provides powerful database design tools, supporting both forward and reverse engineering. Users can create Entity-Relationship (ER) diagrams, generate SQL scripts, and synchronize database models with actual databases. The tool supports various databases, including MySQL, Oracle, SQL Server, PostgreSQL, and more.

**Requirements Management:** Visual Paradigm offers comprehensive requirements management features, enabling users to capture, analyze, and track requirements throughout the project lifecycle. It supports use case modeling, requirements traceability, and change management.

**Code Engineering:** The tool supports code generation and reverse engineering for various programming languages, including Java, C++, C#, and PHP. It can generate code from UML diagrams and synchronize models with the source code.

**Collaboration and Sharing:** Visual Paradigm facilitates team collaboration with real-time editing, a cloud repository, and version control. Users can share diagrams and models easily with stakeholders and team members.

**Visual and Intuitive Interface:** The software features an intuitive and visually appealing interface, making it easy to create and edit models and diagrams. It offers drag-and-drop functionality and customizable templates.

**3. ER/Studio**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eqi04npizow0qzueyjze.png)

ER/Studio is a powerful data modeling tool developed by IDERA, designed to assist database architects, developers, and analysts in creating and managing complex database environments. It is particularly known for its ability to model data across multiple platforms and its robust features that support large-scale enterprise data management.

**Key Features of ER/Studio**

**Comprehensive Data Modeling:** ER/Studio supports both logical and physical data modeling, enabling users to design data models that reflect business requirements and translate them into physical database structures. It includes features for creating entity-relationship (ER) diagrams, UML diagrams, and data flow diagrams.

**Cross-Platform Support:** The tool supports a wide range of database platforms, including Oracle, SQL Server, MySQL, DB2, PostgreSQL, and others. This cross-platform capability allows users to manage data models across different database systems from a single interface.

**Metadata Management:** ER/Studio provides robust metadata management capabilities, allowing users to capture, document, and analyze metadata from various sources. It includes features for metadata import/export, lineage tracking, and impact analysis.

**Collaboration and Version Control:** The tool includes built-in version control and collaborative features, enabling teams to work together on data models efficiently. Users can track changes, merge models, and manage multiple versions of data models.

**Reverse and Forward Engineering:** ER/Studio supports both reverse engineering (importing existing database schemas into the tool) and forward engineering (generating SQL scripts to create databases from models). This bidirectional capability ensures consistency between the data model and the physical database.

**Data Lineage and Impact Analysis:** Users can trace data lineage to understand the flow of data through various systems and processes. Impact analysis helps in assessing the potential effects of changes in the data model on downstream systems and processes.

**Reporting and Documentation:** ER/Studio provides extensive reporting and documentation features, enabling users to generate detailed reports and documentation for their data models. These features support regulatory compliance and facilitate communication with stakeholders.

**User-Friendly Interface:** The tool boasts an intuitive and user-friendly interface, with drag-and-drop functionality and customizable templates. It also offers a visual interface for designing and managing data models, making it accessible to both novice and experienced users.

**The Best Tools for Creating, Testing, and Managing SQL Server Databases**

The next step in your database development journey is creating, testing, and managing the database. Let's review some of the best database tools for SQL Server for this next step.

**4. SQL Server Management Studio (SSMS)**

SQL Server Management Studio (SSMS) is a robust, integrated environment designed by Microsoft for managing SQL Server infrastructure. It combines a broad group of graphical tools with rich script editors to provide access to SQL Server for developers and database administrators of all skill levels.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7uae7b86nkdahisfmw7.png)

**Key Features of SSMS**

**Integrated Environment:** SSMS provides a comprehensive interface for managing SQL Server infrastructure, including SQL Server, Azure SQL Database, and SQL Data Warehouse. It integrates various tools in a single environment, allowing for efficient database management and development.

**Database Administration:** SSMS enables users to configure, monitor, and administer instances of SQL Server. It includes tools for performance tuning, backup and recovery, and security management.

**Query Execution and Optimization:** The tool features an advanced query editor with IntelliSense, syntax highlighting, and code snippets. Users can analyze and optimize query performance using the built-in execution plan visualizer.

**Data Import and Export:** SSMS includes wizards for importing and exporting data, making it easier to move data between different sources. It supports various file formats such as CSV, Excel, and XML.

**Visual Designers:** The tool provides visual designers for creating and modifying database objects such as tables, views, and stored procedures. It also includes a visual query designer that helps in constructing complex queries without needing to write SQL code.

**Security Management:** SSMS includes features for managing database security, including user and role management, permissions, and auditing. It supports encryption and data masking to enhance data security.

**Job Scheduling and Automation:** Users can create and manage SQL Server Agent jobs to automate routine tasks such as backups, maintenance, and report generation. The tool provides job history and monitoring features to track job execution.

**Integration with Azure Services:** SSMS integrates seamlessly with Azure SQL Database and Azure SQL Data Warehouse, providing cloud-based database management capabilities. Users can deploy and manage databases in the Azure environment from within SSMS.

**5. Azure Data Studio**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9jf3igbcnkor4x1d2xt.png)

Azure Data Studio (ADS) is a cross-platform database management tool built by Microsoft. It is designed to handle various database environments, particularly focusing on Azure SQL Database and SQL Server. ADS is appreciated for its modern and user-friendly interface, making it an excellent choice for database administrators and developers alike.

**Key Features of Azure Data Studio**

**Cross-Platform Compatibility:** Azure Data Studio is available on Windows, macOS, and Linux, making it a versatile tool for developers working in diverse environments.

**Integrated Development Environment (IDE):** ADS combines a rich code editor with IntelliSense, syntax highlighting, and code snippets, which enhance productivity and reduce the likelihood of errors. It supports multiple languages, including T-SQL, PowerShell, and KQL (Kusto Query Language).

**Built-In Notebooks:** ADS includes support for Jupyter Notebooks, allowing users to combine live code, visualizations, and narrative text in a single document. This feature is particularly useful for data analysis, troubleshooting, and creating runbooks.

**Extensibility and Customization:** The tool is highly extensible, with a marketplace offering a variety of extensions for additional functionality. Users can customize their workspace with themes, key bindings, and extensions tailored to their specific workflows.

**Kubernetes Support:** ADS supports the management of SQL Server Big Data Clusters, allowing for the deployment, management, and monitoring of SQL Server instances on Kubernetes.

**Rich Query Editor:** The query editor provides features such as IntelliSense, code snippets, and a robust visualizer for query results. It also includes an integrated terminal for running command-line scripts.

**Dashboard Customization:** Users can create customizable dashboards to monitor server performance, database health, and query performance in real time.

**Source Control Integration:** ADS integrates with source control systems like Git, allowing for version control and collaboration on database scripts and projects.

**Data Visualization:** The tool offers built-in charting and visualization options to help users interpret and present data effectively. This is particularly useful for data analysis and reporting.

**6. SQLynx**

SQLynx is a new and innovative database Integrated Development Environment (IDE) designed to cater to the needs of professional SQL developers. This tool provides a comprehensive set of features aimed at improving efficiency, productivity, and the overall experience of managing and developing SQL databases.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xa001pxes3oc6ut1cv8.png)

**Key Features of SQLynx**

**Multiple Query Execution Modes:** SQLynx allows users to execute queries in various modes, providing flexibility depending on the task at hand. This feature is beneficial for running ad-hoc queries, batch processes, or scheduled tasks.

**Local History:** The local history feature in SQLynx keeps track of all your activities, ensuring that you don't lose any of your work. It allows you to review and revert to previous states of your work, providing a safety net against accidental changes or deletions.

**Intuitive User Interface:** SQLynx boasts a highly intuitive and fast user interface. The visual SQL builder helps users create and edit SQL statements easily, while the powerful code auto-completion feature saves time and minimizes errors.

**Context-Sensitive and Schema-Aware Auto-Completion:** The auto-completion feature is both context-sensitive and schema-aware, providing more relevant code suggestions based on the current context and database schema.

**Detailed Query and Database Insights:** SQLynx provides detailed insights into the behavior of your queries and the database engine. This helps in optimizing query performance and understanding the underlying processes.

**Remote Access:** As a web application, SQLynx can be deployed on any server or on your desktop computer and accessed remotely. This feature is particularly useful for teams distributed across different locations or for accessing databases from various devices.

**Stability and Performance:** SQLynx is designed to handle large data volumes efficiently, maintaining stability and high performance even under heavy loads. This makes it suitable for big data applications and environments requiring high reliability.

**Cross-Platform Compatibility:** The SQLynx personal version is available for Windows, Linux, and macOS, ensuring broad compatibility and flexibility.

https://sqlynx.com
concerate
1,900,797
How to deal with race conditions using Java and PostgreSQL
Using locking to control database concurrency Imagine you are working on an e-commerce...
0
2024-07-03T01:00:59
https://dev.to/ramoncunha/how-to-deal-with-race-conditions-using-java-and-postgresql-4jk6
postgres, java, lock, database
## Using locking to control database concurrency

Imagine you are working on an e-commerce system and thousands of people try to buy the last remaining product at the same time. Many of them manage to proceed to the checkout and finish the order, and when you check your stock, you have a product with a negative quantity. How was this possible, and how can you solve it?

Let's code! The first thing you might think of is to check the stock before the checkout. Maybe something like this:

```java
public void validateAndDecreaseSolution(long productId, int quantity) {
    Optional<StockEntity> stockByProductId = stockRepository.findStockByProductId(productId);
    int stock = stockByProductId.orElseThrow().getStock();
    int possibleStock = stock - quantity;

    if (stock <= 0 || possibleStock < 0) {
        throw new OutOfStockException("Out of stock");
    }

    stockRepository.decreaseStock(productId, quantity);
}
```

You can use this validation, but when we talk about dozens, hundreds, thousands, or even millions of requests per second, this validation will not be enough. When 10 requests reach this piece of code at the exact same time and the database returns the same value for `stockByProductId`, your code will break. You need a way to block other requests while we do this verification.

### First solution - FOR UPDATE

Add a lock statement to your SELECT. In this example I did this using FOR UPDATE with Spring Data. As the [PostgreSQL documentation](https://www.postgresql.org/docs/9.0/sql-select.html#SQL-FOR-UPDATE-SHARE) says:

> FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being modified or deleted by other transactions until the current transaction ends.

```java
@Query(value = "SELECT * FROM stocks s WHERE s.product_id = ?1 FOR UPDATE", nativeQuery = true)
Optional<StockEntity> findStockByProductIdWithLock(Long productId);
```

```java
public void validateAndDecreaseSolution1(long productId, int quantity) {
    Optional<StockEntity> stockByProductId = stockRepository.findStockByProductIdWithLock(productId);
    // ... validate
    stockRepository.decreaseStock(productId, quantity);
}
```

All requests to the `stocks` table using that product ID will wait until the current transaction finishes. The objective here is to ensure you get the last updated value of the stock.

### Second solution - pg_advisory_xact_lock

This solution is similar to the previous one, but you can select the lock key. We'll hold the lock for the entire transaction, until we finish all the processing of validation and stock decrement.

```java
public void acquireLockAndDecreaseSolution2(long productId, int quantity) {
    Query nativeQuery = entityManager.createNativeQuery("select pg_advisory_xact_lock(:lockId)");
    nativeQuery.setParameter("lockId", productId);
    nativeQuery.getSingleResult();

    Optional<StockEntity> stockByProductId = stockRepository.findStockByProductId(productId);
    // check stock and throw an exception if necessary

    stockRepository.decreaseStock(productId, quantity);
}
```

The next request will only interact with a product with the same ID after this transaction ends.

### Third solution - WHERE clause

In this case, we'll not lock our row or transaction. We let the transaction continue all the way to the update statement. Notice the last condition: `stock > 0`. This will not permit our stock to go below zero. So if two people try to buy the last unit at the same time, one of them will receive an error because our database will not allow `stock <= -1`.

```java
@Transactional
@Modifying
@Query(nativeQuery = true, value = "UPDATE stocks SET stock = stock - :quantity WHERE product_id = :productId AND stock > 0")
int decreaseStockWhereQuantityGreaterThanZero(@Param("productId") Long productId, @Param("quantity") int quantity);
```

(Note that `stock > 0` is only sufficient when each order takes a single unit; if an order can take several units, the condition should be `stock >= :quantity`.)

### Conclusion

The first and second solutions use pessimistic locking as a strategy. The third is optimistic locking.

The pessimistic locking strategy is used when you want restrictive access to a resource while you perform any task involving that resource. The target resource will be locked against any other access until you finish your process. Be careful with deadlocks!

With optimistic locking, you can perform various queries on the same resource without any block. It's used when conflicts are not likely to happen. Usually, you will have a version related to your row, and when you update this row, the database will compare your row version with the row version in the database. If both are equal, the change will be successful. If not, you have to retry. As you can see, I don't use any version column in this article, but my third solution doesn't block any requests and controls concurrency using the `stock > 0` condition.

If you want to see the full code, you can check my [GitHub](https://github.com/ramoncunha/dbtransactions). There are many other strategies to implement pessimistic and optimistic locking; you can search for `SELECT ... FOR UPDATE SKIP LOCKED`, for example.
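To make the version-column idea mentioned above concrete, here is a minimal sketch of optimistic locking with a version column. The examples in this article are Java, but the pattern lives in the SQL, so this sketch uses the node-postgres (`pg`) client, and the `version` column on `stocks` is a hypothetical addition to the schema:

```javascript
const { Pool } = require('pg');
const pool = new Pool();

// Optimistic locking with a (hypothetical) version column on stocks.
// Nothing is locked; the UPDATE only succeeds if the row is unchanged
// since we read it, otherwise rowCount is 0 and the caller must retry.
async function decreaseStockOptimistic(productId, quantity) {
  const { rows } = await pool.query(
    'SELECT stock, version FROM stocks WHERE product_id = $1',
    [productId]
  );
  const { stock, version } = rows[0];
  if (stock - quantity < 0) throw new Error('Out of stock');

  const result = await pool.query(
    `UPDATE stocks
        SET stock = stock - $1, version = version + 1
      WHERE product_id = $2 AND version = $3`,
    [quantity, productId, version]
  );
  if (result.rowCount === 0) {
    // Someone else updated the row first: surface a conflict or retry.
    throw new Error('Concurrent update detected, please retry');
  }
}
```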
ramoncunha
1,909,513
Shanghai EMTH Import and Export Co., LTD: Transforming Challenges into Opportunities
Shanghai EMTH Import and Export Co., LTD: Rising In The Face Of Challenges Shanghai EMTH Import and...
0
2024-07-03T00:53:57
https://dev.to/elsina_edmondqksj_ff4837/shanghai-emth-import-and-export-co-ltd-transforming-challenges-into-opportunities-20i9
coldroom
Shanghai EMTH Import and Export Co., LTD: Rising in the Face of Challenges

Shanghai EMTH Import and Export Co., LTD is a well-known enterprise engaged in the production and distribution of an excellent array of aluminum goods such as windows, doors, and curtain walls. The company has been able to beat the odds by using forward-looking strategies, maintaining safety in production, delivering quality work, and scouting for unique applications of its products.

Why You Should Choose Shanghai EMTH Import and Export Co., LTD

With over 20 years in the world market, Shanghai EMTH Import & Export Co., LTD is a trusted manufacturer specializing in architectural aluminium. With a team of highly skilled professionals committed to providing world-class products, the company currently has a production capacity in excess of 5000-6000 tons per month. This capacity enables it to deliver orders however numerous or varied in size, showing its dedication to diverse customer demands. Its fast production lead time also allows customers to receive their orders on or before the promised delivery date. The company produces high-quality products at competitive prices, with custom pricing that keeps its range affordable for any client. Its years of experience and commitment to excellence make it a favored partner in the aluminum industry.

Innovations

Shanghai EMTH Import and Export Co., LTD firmly believes that innovation is the foundation of competitive advantage in any industry. The company invests in state-of-the-art technology to constantly improve the standard of its products and enhance production efficiency. This includes thermal break technology for its aluminum products: introducing an insulating barrier into the aluminium profile impedes heat loss, increasing energy efficiency and bringing down costs. The outcome is a significant decrease in the energy consumed for heating or cooling buildings, which shows the company's commitment to sustainability and customer satisfaction. Because customer requirements are always evolving, the company continues to upgrade its products while keeping in touch with the changing needs of its customers.

Safety Measures

Workplace safety comes first. Shanghai EMTH Import and Export Co., LTD has implemented several measures in the production and installation of its aluminum products to protect workers and uphold a high standard. The company follows strict international safety protocols and regulations to ensure the best care for both its products and its clients. Ongoing training and equipment maintenance practices are core aspects of its operating model, keeping workers safe while performing their duties. In addition, routine quality control tests highlight the company's continued dedication to safety standards, building consumer confidence and trust.

Quality Service

Shanghai EMTH Import and Export Co., LTD prides itself on excellent customer service, with the goal of customer satisfaction.
Its team of customer service representatives is always ready to answer questions and support you through the ordering process, so your overall experience as a customer is simplified. The company provides clients with comprehensive installation guidance and after-sales service, easing all doubts so that customers can buy with confidence. Its clear, standard policies further demonstrate its devotion to delivering value and encourage customers' trust.

Product Applications

Shanghai EMTH Import and Export Co., LTD's aluminum products meet the needs of a wide range of applications across different sectors, from construction to automotive to aerospace. Designed to be long-lasting and lightweight, the products are well suited to any setting requiring such material specifications. By meeting the individual needs of customers in each area, the diverse product line has fueled advancements and pushed toward excellence. This consistent quality across a range of industries has earned the company industry credibility, and it has become widely recognized as one of the leading suppliers of high-quality cold room products.

To sum up, Shanghai EMTH Import and Export Co., LTD overcomes challenges by being as innovative as possible while valuing safety, delivering superior service, and pursuing adaptable product applications. With the right mix of high standards and a customer-first strategy, the company has established itself as a reliable partner in aluminium, standing strong through changing times and committed to overcoming adversity.
elsina_edmondqksj_ff4837
1,909,512
Did You Know You Can Use GitHub to Host Your Site for Free?
In this guide, I'll walk you through how to use GitHub Pages to host your personal website or simple...
0
2024-07-03T00:52:29
https://asafhuseyn.com/blog/2024/07/03/Did-You-Know-You-Can-Use-GitHub-to-Host-Your-Site-for-Free.html
github, githubpages, githubgist, jekyll
In this guide, I'll walk you through how to use GitHub Pages to host your personal website or simple blog for free. Based on my experiences, I'll explain the process with examples.

## Contents

1. [What is GitHub Pages?](#what-is-github-pages)
2. [Step 1: Creating a Repository](#step-1-creating-a-repository)
3. [Step 2: Enabling GitHub Pages](#step-2-enabling-github-pages)
4. [Step 3: Creating a Static Site with Jekyll](#step-3-creating-a-static-site-with-jekyll)
5. [Using GitHub Gist](#using-github-gist)
6. [GitHub Actions and the Publishing Process](#github-actions-and-the-publishing-process)
7. [Step 4a: Purchasing a Domain (with Cloudflare)](#step-4a-purchasing-a-domain-with-cloudflare)
8. [Step 4b: Purchasing a Domain (with iCloud+)](#step-4b-purchasing-a-domain-with-icloud)
9. [Step 5: Connecting a Custom Domain](#step-5-connecting-a-custom-domain)
10. [Step 6a: Setting Up a Custom Email Address (Optional)](#step-6a-setting-up-a-custom-email-address-optional)
11. [Step 6b: Setting Up a Custom Email Address with iCloud+ (Optional)](#step-6b-setting-up-a-custom-email-address-with-icloud-optional)
12. [Conclusion](#conclusion)

## What is GitHub Pages?

GitHub Pages is a free service that lets you publish static web pages directly from your GitHub repositories. It's perfect for personal websites, project pages, and blogs.

### Advantages:

- Free hosting
- Easy setup and management
- Integration with Git and GitHub
- Support for the Jekyll static site generator
- Ability to use a custom domain
- SSL certificate support

## Step 1: Creating a Repository

1. Log in to your GitHub account.
2. Click the "New repository" button.
3. Name the repository `<username>.github.io` (e.g., `asafhuseyn.github.io`).
4. Set the repository to "Public".

   Note: If you're using a free GitHub account, you must make your repository public to use GitHub Pages. If you have a GitHub Pro or higher account, you can use GitHub Pages with private repositories.
5. Click the "Create repository" button.

## Step 2: Enabling GitHub Pages

1. Go to the settings of your newly created repository.
2. Click the "Pages" section in the left menu.
3. Under the "Source" section, select the "main" or "master" branch.
4. Click the "Save" button.

Your site should now be live at `https://<username>.github.io`.

## Step 3: Creating a Static Site with Jekyll

GitHub Pages works seamlessly with Jekyll, a Ruby-based static site generator. With Jekyll, you can easily add new pages and blog posts.

### Creating a New Page with Jekyll

- On your local machine, clone your repository and navigate into it:

```bash
git clone https://github.com/<username>/<username>.github.io.git
cd <username>.github.io
```

- Create a new Markdown file:

```markdown
---
layout: default
title: "About Me"
---

# Hello, I'm a developer!
```

- Add new blog posts to the `_posts` folder. The blog post file name format should be `YYYY-MM-DD-title.md`. Example:

```markdown
---
layout: post
title: "My First Blog Post"
date: 2024-07-03
---

This is my first blog post!
```

- Push the changes to GitHub:

```bash
git add .
git commit -m "Added new page and blog post"
git push origin main
```

Note:

* Jekyll supports themes, and you will need to configure the `_config.yml` file accordingly.
* To keep things simple, I did not delve into more detailed topics like creating categories. You can check my repository example at [asafhuseyn.github.io](https://asafhuseyn.github.io).

## Using GitHub Gist

GitHub Gist is a great way to share and manage your code snippets.
While you can use Markdown code blocks to share code on your Jekyll site, Gists are better for more complex or frequently updated code. To use Gists on your Jekyll site:

1. Create a Gist: https://gist.github.com
2. Copy the ID of the created Gist.
3. Add it to your Jekyll page as follows:

```liquid
{% gist GIST_ID %}
```

This method allows you to manage and update your code examples centrally.

## GitHub Actions and the Publishing Process

GitHub Pages uses GitHub's CI/CD tool, GitHub Actions, to automatically build and publish your site. This process usually works smoothly and requires no intervention. Here's how it works:

1. When you push changes to the default branch (usually `main` or `master`), GitHub Actions is triggered automatically.
2. GitHub Actions builds your Jekyll site and pushes the results to the `gh-pages` branch.
3. This process typically takes a few minutes.

To monitor this process:

1. Go to the "Actions" tab in your GitHub repository.
2. You can see the latest workflows and their status.
3. If there are any errors, you can check the details here.

## Step 4a: Purchasing a Domain (with Cloudflare)

1. Create a Cloudflare account (if you don't have one).
2. Use Cloudflare's domain registration service to purchase your desired domain name.

## Step 4b: Purchasing a Domain (with iCloud+)

1. On your iPhone or iPad, go to Settings > [Your Name] > iCloud > iCloud+ > Custom Email Domain.
2. Tap "Add Domain".
3. To purchase a new domain, tap "Buy a new domain" and follow the instructions to purchase your desired domain name.

## Step 5: Connecting a Custom Domain

1. Go to your DNS settings and add the following A records for `@`:
   - 185.199.108.153
   - 185.199.109.153
   - 185.199.110.153
   - 185.199.111.153
2. (Optional) Add the following AAAA records for `@` to support IPv6:
   - 2606:50c0:8000::153
   - 2606:50c0:8001::153
   - 2606:50c0:8002::153
   - 2606:50c0:8003::153
3. Go to the settings of your GitHub repository, click on the "Pages" section, and enter your custom domain name (e.g., example.com) in the "Custom domain" field.

## Step 6a: Setting Up a Custom Email Address (Optional)

To create a custom email address with your domain:

1. Choose an email service provider (Google Workspace, Zoho Mail, etc.).
2. Follow your provider's instructions to add MX records to your DNS settings.
3. Example MX records:

```
MX @ 10 mx1.mailserver.com
MX @ 20 mx2.mailserver.com
```

4. In your email service provider's admin panel, create a new email address (e.g., `contact@example.com`).

## Step 6b: Setting Up a Custom Email Address with iCloud+ (Optional)

If you purchased your domain through iCloud+, you don't need to configure DNS settings. Just create an email address:

1. On your iPhone or iPad, go to Settings > [Your Name] > iCloud > iCloud+ > Custom Email Domain.
2. Select your domain.
3. Tap "Add Email Address".
4. Enter the desired email address (e.g., `contact@`, `info@`, etc.).
5. Tap "Done".

You can use this address for email, iMessage, and FaceTime.

## Conclusion

In this guide, I've tried to cover the basic steps you can take with GitHub Pages.

*I explained the steps through Cloudflare because it's my favorite and Apple collaborates with it, but you can use any registrar, as the steps will be the same. For specific information on any topic, feel free to reach out to me.*
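As a reference for the `_config.yml` mentioned in Step 3, here is a minimal sketch; the title, description, and URL values are placeholders you would replace with your own, and `minima` is just the default theme that GitHub Pages supports out of the box:

```yaml
# _config.yml: minimal Jekyll configuration for GitHub Pages
title: My Personal Site
description: Notes and blog posts
url: "https://<username>.github.io"   # or your custom domain
theme: minima                         # any GitHub Pages-supported theme

# Blog posts live in _posts/ and use this permalink style
permalink: /blog/:year/:month/:day/:title.html
```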
asafhuseyn
1,909,466
Automating Creation of Users and groups using Bash Script (Step-by-Step Guide)
As a DevOps engineer, it is quite a common task to create, update, and delete users and groups....
0
2024-07-03T00:44:10
https://dev.to/sudobro/automating-creation-of-users-and-groups-using-bash-script-step-by-step-guide-3dm0
devops, linux, learning, bash
As a DevOps engineer, it is quite a common task to create, update, and delete users and groups. Automating this process can save a lot of time and help reduce human errors, especially when onboarding numerous users. In this article, I'll walk you through using bash to automate the creation of users and their respective groups, setting up home directories, generating random passwords, and logging all actions performed.

**Objectives**

- Create users and their personal groups.
- Add users to specified groups.
- Set up home directories with appropriate permissions.
- Generate and securely store passwords.
- Log all actions for auditing purposes.
- Ensure error handling.

## Let's Begin

First and foremost, every shell script starts with a **"shebang"** on the first line.

```
#!/bin/bash
```

The shebang tells the system which interpreter should run the file, in this case bash.

## Setting up the directory for user logs and files

It is necessary to create directories and files for logging and password storage. If they already exist, DO NOT CREATE THEM AGAIN!!!

```
mkdir -p /var/log /var/secure
touch /var/log/user_management.log
touch /var/secure/user_passwords.txt
```

- **/var/log:** Directory for log files.
- **/var/secure:** Directory for secure files.
- **user_management.log:** Log file to record actions.

## Secure the generated passwords

```
chmod 600 /var/secure/user_passwords.txt
```

- **user_passwords.txt:** File to store passwords securely, with read and write permissions for the file owner only.

Earlier, we created the directory `/var/secure` to store `user_passwords.txt`. To give it the appropriate permissions, we use `chmod 600` so that only the file owner can view the generated passwords.

## Create a function to log actions

The `log_action()` function records actions with timestamps in the log file.

```
log_action() {
    echo "$(date) - $1" >> "/var/log/user_management.log"
}
```

## User Creation Function

The `create_user()` function handles the creation of user accounts and their associated groups. It takes two arguments: username and groups.

Steps taken in the `create_user()` function:

- Check for Existing User: Logs a message if the user already exists.

```
if id "$user" &>/dev/null; then
    log_action "User $user already exists."
    return
fi
```

`id "$user" &>/dev/null` checks if the user exists by attempting to retrieve the user's information. If the user does not exist, `id` returns a non-zero exit status. The `&>/dev/null` part redirects all output (both stdout and stderr) to `/dev/null`, effectively silencing the command. If the user exists, `log_action "User $user already exists."` logs a message indicating so, and `return` exits the `create_user()` function early, preventing any further actions for the existing user.

- Create User Group: Creates a personal group for the user.

```
groupadd "$user"
```

`groupadd "$user"` creates a new group with the same name as the user. This group will be the primary group for the user being created.

- Handle Additional Groups: Checks and creates additional groups if they do not exist.

```
IFS=' ' read -ra group_array <<< "$groups"
log_action "User $user will be added to groups: ${group_array[*]}"
for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)  # Trim whitespace
    if ! getent group "$group" &>/dev/null; then
        groupadd "$group"
        log_action "Group $group created."
    fi
done
```

Here, we split the `groups` string into an array using spaces as delimiters and log the groups that the user will be added to. With `for group in "${group_array[@]}"; do`, we iterate over each group in `group_array` and trim any leading or trailing whitespace from the group name. We use `if ! getent group "$group" &>/dev/null; then` to check whether the group already exists; if not, we create it with `groupadd "$group"` and log that it has been created with `log_action "Group $group created."`.

- Create User: Creates the user with a home directory and bash shell.

```
useradd -m -s /bin/bash -g "$user" "$user"
if [ $? -eq 0 ]; then
    log_action "User $user created with primary group: $user"
else
    log_action "Failed to create user $user."
    return
fi
```

- Assign Groups: Adds the user to additional groups.

```
for group in "${group_array[@]}"; do
    usermod -aG "$group" "$user"
done
log_action "User $user added to groups: ${group_array[*]}"
```

We use `for group in "${group_array[@]}"; do` to iterate over each group in `group_array`, then add the user to the specified group (`-aG` appends the user to the group). Finally, we log that the user has been added to the specified groups.

- Generate and Store Password: Generates a random password and stores it securely.

```
password=$(</dev/urandom tr -dc A-Za-z0-9 | head -c 12)
echo "$user:$password" | chpasswd
echo "$user,$password" >> "/var/secure/user_passwords.txt"
```

We use `/dev/urandom` to generate a random 12-character password and `tr` to filter alphanumeric characters. Next, we set the password for the user using the `chpasswd` command and append the username and password to the secure file `/var/secure/user_passwords.txt`.

- Set Permissions: This part of the code sets the appropriate permissions and ownership for the user's home directory.

```
chmod 700 "/home/$user"
chown "$user:$user" "/home/$user"
```

With `700`, only the user has read, write, and execute permissions. `chown "$user:$user" "/home/$user"` sets the owner and group of the home directory to the user.

Here is the full combined code within the `create_user()` function.

```
create_user() {
    local user="$1"
    local groups="$2"
    local password

    if id "$user" &>/dev/null; then
        log_action "User $user already exists."
        return
    fi

    groupadd "$user"

    IFS=' ' read -ra group_array <<< "$groups"
    log_action "User $user will be added to groups: ${group_array[*]}"
    for group in "${group_array[@]}"; do
        group=$(echo "$group" | xargs)
        if ! getent group "$group" &>/dev/null; then
            groupadd "$group"
            log_action "Group $group created."
        fi
    done

    useradd -m -s /bin/bash -g "$user" "$user"
    if [ $? -eq 0 ]; then
        log_action "User $user created with primary group: $user"
    else
        log_action "Failed to create user $user."
        return
    fi

    for group in "${group_array[@]}"; do
        usermod -aG "$group" "$user"
    done
    log_action "User $user added to groups: ${group_array[*]}"

    password=$(</dev/urandom tr -dc A-Za-z0-9 | head -c 12)
    echo "$user:$password" | chpasswd
    echo "$user,$password" >> "/var/secure/user_passwords.txt"

    chmod 700 "/home/$user"
    chown "$user:$user" "/home/$user"
    log_action "Password for user $user set and stored securely."
}
```

## Main Execution Flow

```
if [ $# -ne 1 ]; then
    echo "Usage: $0 <user_list_file>"
    exit 1
fi

filename="$1"
if [ ! -f "$filename" ]; then
    echo "Users list file $filename not found."
    exit 1
fi

while IFS=';' read -r user groups; do
    user=$(echo "$user" | xargs)
    groups=$(echo "$groups" | xargs | tr -d ' ')
    groups=$(echo "$groups" | tr ',' ' ')
    create_user "$user" "$groups"
done < "$filename"

echo "Creation of users is complete. Go ahead and check /var/log/user_management.log for detailed information."
```

Here, we check that a user list file is provided, then read the file and process each user entry, calling the `create_user()` function for each line and passing `$user` and `$groups` as arguments.

## Let's put everything together

```
#!/bin/bash

# Create directories for user logs and passwords
mkdir -p /var/log /var/secure

# Create log file and passwords file
touch /var/log/user_management.log
touch /var/secure/user_passwords.txt

# Make the password file readable and writable by the file owner only
chmod 600 /var/secure/user_passwords.txt

# Function to record actions to the log file
log_action() {
    echo "$(date) - $1" >> "/var/log/user_management.log"
}

create_user() {
    local user="$1"
    local groups="$2"
    local password

    # Check if user already exists by looking up user information with id
    if id "$user" &>/dev/null; then
        log_action "User $user already exists."
        return
    fi

    # Create personal group for the user
    groupadd "$user"

    # Create additional groups if they do not exist
    IFS=' ' read -ra group_array <<< "$groups"
    log_action "User $user will be added to groups: ${group_array[*]}"
    for group in "${group_array[@]}"; do
        group=$(echo "$group" | xargs)  # Trim whitespace
        if ! getent group "$group" &>/dev/null; then
            groupadd "$group"
            log_action "Group $group created."
        fi
    done

    # Create user with home directory and shell, primary group set to the personal group
    useradd -m -s /bin/bash -g "$user" "$user"
    if [ $? -eq 0 ]; then
        log_action "User $user created with primary group: $user"
    else
        log_action "Failed to create user $user."
        return
    fi

    # Add the user to additional groups
    for group in "${group_array[@]}"; do
        usermod -aG "$group" "$user"
    done
    log_action "User $user added to groups: ${group_array[*]}"

    # Generate password and set it for the user
    password=$(</dev/urandom tr -dc A-Za-z0-9 | head -c 12)
    echo "$user:$password" | chpasswd

    # Store user and password securely in a file
    echo "$user,$password" >> "/var/secure/user_passwords.txt"

    # Set permissions and ownership for the user's home directory
    chmod 700 "/home/$user"
    chown "$user:$user" "/home/$user"
    log_action "Password for user $user set and stored securely."
}

# Check if user list file is provided
if [ $# -ne 1 ]; then
    echo "Usage: $0 <user_list_file>"
    exit 1
fi

filename="$1"
if [ ! -f "$filename" ]; then
    echo "Users list file $filename not found."
    exit 1
fi

# Read user list file and create users
while IFS=';' read -r user groups; do
    user=$(echo "$user" | xargs)
    groups=$(echo "$groups" | xargs | tr -d ' ')
    # Replace commas with spaces for the usermod group format
    groups=$(echo "$groups" | tr ',' ' ')
    create_user "$user" "$groups"
done < "$filename"

echo "Creation of users is complete. Go ahead and check /var/log/user_management.log for detailed information."
```

## Usage

To use this script, provide a user list file as an argument. Each line in the file should contain a username and a comma-separated list of groups (optional), separated by a semicolon. For example:

```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```

To run the script, use this command:

```
./script_name.sh user_list_file.txt
```

If you are not a root user, use `sudo ./create_users.sh user_list_file.txt`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jhs11zw8wihpi5dc9ak.png)

## Conclusion

This script provides a robust solution for automating user creation and management, ensuring secure handling of passwords and detailed logging of actions. It simplifies the process of setting up new user accounts while maintaining security and auditability.

To learn more and kickstart your programming journey you can visit: https://hng.tech/internship or https://hng.tech/premium

Feel free to reach out with questions or suggestions for improvements. Happy automating!😁👌
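P.S. As a quick sanity check after running the script, commands like these can confirm the results; the usernames and group are the illustrative ones from the sample file above:

```
id light                 # shows uid, gid, and all supplementary groups
getent group dev         # lists the members of the dev group
ls -ld /home/light       # should show drwx------ light light
```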
sudobro
1,909,509
My First Task In HNG Internship
I signed up for an internship program named HNG. It is expected that the intern should have an...
0
2024-07-03T00:40:39
https://dev.to/ik_again/my-first-task-in-hng-internship-12f6
I signed up for an internship program named HNG. Interns are expected to have intermediate to advanced experience in whichever track they wish to participate in. For more information regarding the internship, you can follow this link https://hng.tech/internship, and for hiring through HNG you can also check out this link https://hng.tech/hire.

---

Task 1: We were tasked to write a script named create_user.sh that creates users and adds them to groups by reading from an input file.

---

```
#!/bin/bash

# Log file and password file
PASSWORD_FILE="/var/secure/user_passwords.txt"
LOG_FILE="/var/log/user_management.log"

# Ensure that exactly one argument is provided;
# if not, print usage information and exit
if [ $# -ne 1 ]; then
  echo "Usage: $0 <input_textfile>" | sudo tee -a $LOG_FILE
  exit 1
fi

input_textfile="$1"
```

Considering the above code block:

`#!/bin/bash` is the shebang declaration specifying that this file is a bash script.

```
PASSWORD_FILE="/var/secure/user_passwords.txt"
LOG_FILE="/var/log/user_management.log"
```

The above block of code assigns the path `/var/secure/user_passwords.txt` to the variable `PASSWORD_FILE` and the path `/var/log/user_management.log` to the variable `LOG_FILE`.

```
if [ $# -ne 1 ]; then
  echo "Usage: $0 <input_textfile>" | sudo tee -a $LOG_FILE
  exit 1
fi
```

The above block of code checks that exactly one argument is passed to the script.

- `$# -ne 1` checks whether the number of arguments passed is not equal to one; if so, it prints the usage message to the terminal and also logs it.
- Otherwise, the script continues past this block.

```
if [ ! -f "$input_textfile" ]; then
  echo "Error: The file $input_textfile does not exist" | sudo tee -a $LOG_FILE
  exit 1
fi
```

- `! -f "$input_textfile"` checks whether the input file passed to the script does not exist; if so, the script exits.

```
sudo mkdir -p /var/secure
sudo touch $PASSWORD_FILE
sudo chown root:root $PASSWORD_FILE
sudo chmod 600 $PASSWORD_FILE
sudo touch $LOG_FILE
sudo chmod 640 $LOG_FILE
```

- This creates the necessary files and directories, such as $LOG_FILE and $PASSWORD_FILE, and sets permissions: $PASSWORD_FILE is given root ownership and restricted to read and write access for the owner only. (Note that the directory must exist and the file must be touched before `chown` and `chmod` can run against it.)
- `sudo chmod 640 $LOG_FILE` ensures that the owner has read and write access and the group has read-only access.

```
generate_password() {
  < /dev/urandom tr -dc 'A-Za-z0-9!@#$%&*' | head -c 12
}
```

- This function is responsible for generating a random password.

**Read File**

```
while IFS=';' read -r user groups; do
  if [ -z "$user" ] || [ -z "$groups" ]; then
    echo "Skipping invalid line: $user;$groups" | sudo tee -a $LOG_FILE
    continue
  fi
```

- Start a loop that reads a line from the input file and splits it into two parts separated by `;` based on IFS.
- `read -r user groups` assigns the first part to the username and the remaining part to the groups.
- `-z "$user"` and `-z "$groups"` check whether the user or group value is empty.

**Creating users**

```
  if id -u "$user" >/dev/null 2>&1; then
    echo "This particular user $user exists" | sudo tee -a $LOG_FILE
  else
    sudo useradd -m "$user"
    if [ $? -eq 0 ]; then
      echo "User $user created" | sudo tee -a $LOG_FILE

      # Generate a random password for each user
      password=$(generate_password)
      echo "$user,$password" | sudo tee -a $PASSWORD_FILE >/dev/null
      echo "$user:$password" | sudo chpasswd
      echo "User $user password is set" | sudo tee -a $LOG_FILE

      # Set appropriate permissions for the home directory
      sudo chmod 700 /home/$user
      sudo chown $user:$user /home/$user
      echo "Home directory for user $user set up with appropriate permissions" | sudo tee -a $LOG_FILE
    else
      echo "Failed to create user $user" | sudo tee -a $LOG_FILE
      continue
    fi
  fi
```

- `id -u "$user" >/dev/null 2>&1` looks up the user ID and suppresses standard output and standard error by redirecting them to `/dev/null`.
- `sudo useradd -m "$user"`:
  - `useradd` is the command used to add a new user.
  - `-m` tells useradd to create a home directory for the new user if it does not already exist. The home directory is created under `/home/` and named after the user.
  - `"$user"` is the username of the new user being created; the $user variable contains the name of the user.
- `[ $? -eq 0 ]` checks whether the previous command executed successfully; 0 indicates success.
- `password=$(generate_password)` calls the generate_password function and assigns the result to the password variable.
- `echo "$user,$password" | sudo tee -a $PASSWORD_FILE >/dev/null` appends the credentials to the password file while suppressing terminal output via `/dev/null`.
- `echo "$user:$password" | sudo chpasswd` sets the password for the user.
- `echo "User $user password is set" | sudo tee -a $LOG_FILE` displays the message on the terminal and logs it.
- `sudo chmod 700 /home/$user` gives the user full permissions on the home directory and denies access to everyone else.
- `sudo chown $user:$user /home/$user` makes the user the owner of the directory.
- `else`: if the user creation fails, the failure message is printed to the terminal and logged.

**Adding Users to Group**

```
  IFS=',' read -r -a group_array <<< "$groups"
  for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)
    if getent group "$group" >/dev/null 2>&1; then
      sudo usermod -aG "$group" "$user"
      echo "User $user added to existing group $group" | sudo tee -a $LOG_FILE
    else
      sudo groupadd "$group"
      sudo usermod -aG "$group" "$user"
      echo "Group $group created and user $user added to it" | sudo tee -a $LOG_FILE
    fi
  done
done < "$input_textfile"
```

- `IFS=',' read -r -a group_array <<< "$groups"`:
  - `IFS=','` sets the Internal Field Separator to a comma, so the read command splits the input string on commas.
  - `read -r -a group_array <<< "$groups"` reads the groups variable, splits it by comma, and stores the values in group_array.
- `group=$(echo "$group" | xargs)` removes any leading or trailing whitespace from the group name.
- `for group in "${group_array[@]}"` loops through the group_array array, assigning each element to group in turn.
- `if getent group "$group" >/dev/null 2>&1` checks whether the group exists on the system, suppressing standard output and standard error.
- `sudo usermod -aG "$group" "$user"` adds the user to the existing group.
- `sudo groupadd "$group"` creates a new group.
- `sudo usermod -aG "$group" "$user"` then adds the user to the new group.
- `echo "Group $group created and user $user added to it" | sudo tee -a $LOG_FILE` prints the message and logs it to the log file.
- `done` ends the for loop.
- `done < "$input_textfile"` ends the while loop that reads from the input file.
- `echo "User creation and group assignment completed." | sudo tee -a $LOG_FILE` outputs that the creation of users and groups has finished.

**Running The Script**

- Create the input file named `name-of-text-file.txt`:

```
nano name-of-text-file.txt

# file content
kachi; security, crypto, signals
dika; werey, genuis, smartkid
diamond; werey, soc, faith
chimummy; boss, theboss, smartguy
david; psycho, funny, jovial
faith; babe, babygirl, fine
```

- Execute the script `create_user.sh` with the text file `name-of-text-file.txt`:

```
# make the script file executable
chmod +x create_user.sh

# run the script
./create_user.sh name-of-text-file.txt
```

- Checking the LOG_FILE:

```
sudo cat /var/log/user_management.log
```

- The displayed output for the log file:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct92terz4sznyh4v6qlh.png)

- Checking the password file PASSWORD_FILE:

```
sudo cat /var/secure/user_passwords.txt
```

- The displayed output for the password file:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9q9nldvvx57ro7x6i1a.png)
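If you need to re-run the script while testing, a hypothetical cleanup of the sample users and groups could look like this; the names are the illustrative ones from the example file above:

```
# Remove a test user together with their home directory
sudo userdel -r kachi

# Remove a test group once no user needs it
sudo groupdel crypto
```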
ik_again
1,909,508
Scraping Twitter comments with selenium(Python): step-by-step guide
In today's world full of data, everyone uses social media to express themselves and contribute to the...
0
2024-07-03T00:40:34
https://dev.to/david_hdz/scraping-twitter-comments-with-seleniumpython-step-by-step-guide-d51
selenium, webscraping, automation
In today's world full of data, everyone uses social media to express themselves and contribute to the public voice. This is valuable information that is publicly available to anyone: you can gather lots of insights, feedback, and very good advice from this public opinion. That is why I bring you this step-by-step guide to start scraping comments on Twitter without much work.

What you will need:

- A text editor
- A programming language that Selenium supports (I will be using Python)
- A Twitter account (preferably not your main one)

## **Warning: Used in the wrong manner, web scraping can be unethical and violate terms of service, which could lead to permanent IP address bans and more. Use these web scraping tools with no bad intentions.**

## Step 1: The setup

To start, we will create a new directory with a virtual environment and activate it.

```
> C:\Users\Blogs\Webscraping> python -m venv .
> C:\Users\Blogs\Webscraping> Scripts\activate
```

This can vary with your operating system; if you are not familiar with Python and virtual environments, [refer here](https://www.freecodecamp.org/news/how-to-setup-virtual-environments-in-python/) for more guidance.

Okay, now that we have our environment running, I will install Selenium, our main dependency.

`> pip install selenium`

Now that we have all of our tools ready, we shall code.

## Step 2: Our code process

Selenium is a free tool for automating processes in a web application. In this case, we will be using the Selenium WebDriver, basically a tool that lets you run useful scripts in different browsers. In our case, we will be using Chrome.

Our main process will look like this: (main.py)

```python
from twitter import Twitter
from selenium import webdriver
from time import sleep

## Desired post to scrape comments from
URL_POST = "**"

## Account credentials
username = "**"
email = "**"
password = "**"

driver = webdriver.Chrome()

Twitter(driver).login(email, username, password)
Twitter(driver).get_post(URL_POST)

driver.quit()
```

Selenium WebDriver lets us do a lot of stuff in a browser, but let's leave that for the next step. Right now I would recommend creating a new Twitter account and searching for a post that you would like to scrape. Yes, I know, we haven't defined things like the Twitter class, but for now it will be best to pass your driver as an argument.

## Step 3: The Twitter class

This will be the largest and most complex part of our program. It covers three methods: login, get_post, and scrape.

We will first define a constructor with one input variable:

- driver: the Selenium driver that we started in main.py

It also sets up:

- wait: a useful helper for waiting on HTML tags that have not loaded yet

(twitter.py)

```python
import sys
from csv_exports import twitter_post_to_csv
from time import sleep
from selenium.webdriver.common.by import By
from useful_functions import validate_span
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class Twitter:
    def __init__(self, driver):
        self.driver = driver
        self.wait = WebDriverWait(self.driver, 10)
```

**Login Method**

To access Twitter comments, we need to be logged in, and unfortunately a web driver does not remember credentials. So let's start by automating our login process...
The code:

```python
    def login(self, email, username, password):
        drive = self.driver
        wait = self.wait

        ## Go to the login page URL
        drive.get("https://x.com/i/flow/login")

        ## Send the email credential to the first form input
        input_email = wait.until(EC.presence_of_element_located((By.NAME, "text")))
        input_email.clear()
        input_email.send_keys(email)
        sleep(3)

        ## Submit the form
        button_1 = drive.find_elements(By.CSS_SELECTOR, "button div span span")
        button_1[1].click()

        ## Send the username credential to the second form input
        input_verification = wait.until(EC.presence_of_element_located((By.NAME, "text")))
        input_verification.clear()
        sleep(3)
        input_verification.send_keys(username)

        ## Submit the form
        button_2 = drive.find_element(By.CSS_SELECTOR, "button div span span")
        sleep(3)
        button_2.click()

        ## Send the password credential to the form input
        input_password = wait.until(EC.presence_of_element_located((By.NAME, "password")))
        input_password.clear()
        sleep(3)
        input_password.send_keys(password)
        sleep(3)

        ## Submit the last form
        button_3 = drive.find_element(By.CSS_SELECTOR, "button div span span")
        button_3.click()
        sleep(5)
```

Here are the forms your program will be filling out:

1. The first form:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yst9vzxlslc35liahuy7.png)

2. The second form:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vx85hy91hg86g1uagg04.png)

3. The third form:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2vamke0txp51maler9pm.png)

BREAKDOWN

Our method waits for an element, such as input_email, to be present; since this is our first request to the URL, everything needs to be fully loaded into the HTML. We use the find_elements() method from the web driver to locate the inputs in the HTML. Our method systematically goes through each of the forms one by one, entering and submitting values with the .send_keys() and .click() methods. We also use .clear() to make sure the input box does not contain stale text when the page loads. We have successfully logged in.

NOTE* The second form only appears after a few uses of a Selenium web driver against the Twitter login page. Twitter detects when a bot comes in and types way too fast, so this second page appears only when a bot is detected. After a few runs of this program, you will always see it when logging in to your scraping account.

**Scrape**

This will be the method that retrieves a post's comments. Here we have big limitations and some problems to solve.

The first one is that there is no way to target only Twitter comments. Twitter comments live inside `<span>` elements, which Twitter likes to use a lot, for lots of things. The best way I could find to get Twitter comments is with `drive.find_elements(By.XPATH, "//span[@class='css-1jxf684 r-bcqeeo r-1ttztb7 r-qvutc0 r-poiln3']")`, in other words, using XPath with classes. This returns a lot of unnecessary data, so we have to do a lot of cleaning.

The second problem, a little less severe, is Twitter's dynamic rendering. When scrolling down or up, Twitter loads or deletes HTML from the current document, so in order to get every comment possible, we have to go slowly and extract elements before we scroll again.

Now that we have identified these problems, let's get to work.
```python
    def scrape(self):
        drive = self.driver

        containers = drive.find_elements(By.XPATH, "//span[@class='css-1jxf684 r-bcqeeo r-1ttztb7 r-qvutc0 r-poiln3']")

        ## Scrape data and store it in a list
        scraped_data = []
        temporary_1 = ""
        temporary_2 = ""
        index = 0
        index_dict = 0

        while index < len(containers):
            text = containers[index].text
            if text:
                if text[0] == "@":
                    temporary_1 = text
                    index_dict = index_dict + 1
                if validate_span(text) is True and index_dict == 1:
                    temporary_2 = text
                    arr_push = {
                        "username": temporary_1,
                        "post": temporary_2
                    }
                    scraped_data.append(arr_push)
                    temporary_2, temporary_1 = "", ""
                    index_dict = 0
            index = index + 1

        return scraped_data
```

This code retrieves all comments from the currently loaded document. By looping with certain conditions and adding helper functions like validate_span(), we are able to clean the data reliably. If you find a problem in the algorithm, feel free to let me know.

The validate_span() function: (useful_functions.py)

```python
def validate_span(span):
    if span[0] == "." or span[0] == "·":
        return False
    if span[0] == "@":
        return False
    if validate_number(span):
        return True
    return False


def validate_number(string):
    if string[len(string) - 1] == "k":
        string = string[0: len(string) - 1]
        string = string.replace(".", "")
    index = 0
    for i in string:
        if (i == "1" or i == "2" or i == "3" or i == "4" or i == "5"
                or i == "6" or i == "7" or i == "8" or i == "9" or i == "0"):
            index = index + 1
    if len(string) <= index:
        return False
    else:
        return True
```

All of our unwanted elements usually follow counts, like like counts, or are random dots and whitespace. With a few conditions, this is an easy cleanup task.

**The get_post method**

This is the method where we loop until we get to the bottom of the page, calling the scrape method on every iteration to make sure all data is collected.

```python
    def get_post(self, url):
        drive = self.driver
        wait = self.wait

        drive.get(url)
        sleep(3)

        data = []

        javascript = (
            "let inner_divs = document.querySelectorAll('[data-testid=\"cellInnerDiv\"]');"
            "window.scrollTo(0, inner_divs[0].scrollHeight);"
            "return inner_divs[2].scrollHeight;"
        )

        previous_height = drive.execute_script("return document.body.scrollHeight")
        avg_scroll_height = int(drive.execute_script(javascript)) * 13

        while True:
            data = data + self.scrape()
            drive.execute_script("window.scrollTo(0, (document.body.scrollHeight + " + str(avg_scroll_height) + "));")
            sleep(3)
            new_height = drive.execute_script("return document.body.scrollHeight")
            if new_height == previous_height:
                break
            previous_height = new_height
```

By injecting JavaScript into the driver and looping while the document's scroll heights differ, we are able to scrape data from every part of the page.

Finally, at the end of get_post, we can do something useful with the data. In my case, I'm just going to print it.

```python
        for item in data:
            print(item)
```

Now all you have to do is change your desired URL and run the main file, then wait for your data to be returned.

And you've done it! You have successfully created a web scraper for Twitter. Needless to say, use web scraping technologies in legal and ethical ways if you don't want to get in trouble.

In conclusion, Twitter comments can be scraped very efficiently, but it should always be done with correct legal use. Apart from this, Twitter data is very valuable and can help you understand public opinion on a topic.
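The twitter.py imports reference a `csv_exports` module that never appears in the post. A minimal sketch of what `twitter_post_to_csv` could look like is below; the file name and column layout are assumptions, since the original helper isn't shown:

```python
# csv_exports.py: hypothetical helper matching the import in twitter.py
import csv


def twitter_post_to_csv(scraped_data, filename="comments.csv"):
    """Write the scraped [{'username': ..., 'post': ...}] dicts to a CSV file."""
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["username", "post"])
        writer.writeheader()
        writer.writerows(scraped_data)
```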
david_hdz
1,909,507
Issue 51 of AWS Cloud Security Weekly
(This is just the highlight of Issue 51 of AWS Cloud Security weekly @...
0
2024-07-03T00:34:30
https://aws-cloudsec.com/p/issue-51
(This is just the highlight of Issue 51 of AWS Cloud Security Weekly @ https://aws-cloudsec.com/p/issue-51 << Subscribe to receive the full version in your inbox weekly for free!!)

**What happened in AWS CloudSecurity & CyberSecurity last week, June 20-July 02, 2024?**

- The Amazon GuardDuty EC2 Runtime Monitoring eBPF security agent now extends its support to Amazon Elastic Compute Cloud (EC2) workloads running on Ubuntu (versions 20.04 and 22.04) and Debian (versions 11 and 12) operating systems. If you utilize GuardDuty EC2 Runtime Monitoring with automated agent management, the security agent for your Amazon EC2 instances will be upgraded automatically. However, if you do not use automated agent management, you are responsible for manually upgrading the agent.
- AWS has launched Amazon Virtual Private Cloud (VPC) support for AWS CloudShell, enabling creation of CloudShell environments within a VPC. This allows you to securely use CloudShell alongside other resources within the same subnet of your VPC without requiring additional network setup. Before this release, there was no method to control network traffic from CloudShell to the internet.
- Amazon CodeCatalyst now integrates support for using source code repositories hosted on GitLab.com within CodeCatalyst projects, allowing you to leverage GitLab.com repositories with CodeCatalyst's features, including its cloud IDE (Development Environments). You can initiate CodeCatalyst workflows in response to GitLab.com events, monitor the status of CodeCatalyst workflows directly within GitLab.com, and enforce blocking of GitLab.com pull request merges based on CodeCatalyst workflow statuses.
- Amazon DocumentDB (with MongoDB compatibility) now includes support for cluster authentication using AWS Identity and Access Management (IAM) user and role ARNs. This enhancement allows users and applications connecting to an Amazon DocumentDB cluster for data operations such as reading, writing, updating, or deleting to authenticate using AWS IAM identities. This means that the same AWS IAM user or role can be used consistently across connections to different DocumentDB clusters and other AWS services. For applications deployed on AWS EC2, AWS Lambda, AWS ECS, or AWS EKS, there is no longer a need to manage passwords within the application for authentication to Amazon DocumentDB. Instead, these applications retrieve their connection credentials securely through environment variables associated with an AWS IAM role, thereby establishing a passwordless authentication mechanism.
- AWS CodeBuild now offers the ability to extend build timeouts to up to 36 hours, a significant increase from the previous limit of 8 hours. This setting controls the maximum duration before CodeBuild terminates a build request if it remains incomplete. With this update, organizations managing workloads that demand extended timeouts, such as extensive automated test suites or builds involving embedded machines, can effectively utilize CodeBuild's capabilities.

**Trending on the news & advisories (Subscribe to the newsletter for details):**

- GitLab critical patch which could allow an attacker to trigger a pipeline as another user.
- CISA: Exploring Memory Safety in Critical Open Source Projects.
- Grafana security update: Grafana Loki and unintended data write attempts to Amazon S3 buckets.
- TeamViewer breached in alleged APT hack.
- Russian National Charged for Conspiring with Russian Military Intelligence to Destroy Ukrainian Government Computer Systems and Data.
- Microsoft: Toward greater transparency: Unveiling Cloud Service CVEs.
- Geisinger provides notice of Nuance's data security incident.
- regreSSHion: Remote Unauthenticated Code Execution Vulnerability in OpenSSH server.
- Rapid7 Agrees to Acquire Cyber Asset Attack Surface Management Company, Noetic Cyber.
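As an illustration of the CodeBuild change above, raising an existing project's timeout with the AWS CLI might look like the sketch below; the project name is a placeholder, and the flag should be confirmed against your CLI version:

```bash
# Hypothetical project name; 2160 minutes = 36 hours
aws codebuild update-project \
  --name my-long-running-build \
  --timeout-in-minutes 2160
```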
aws-cloudsec
1,909,506
Vault Secret as an External Secret
This guide is here to help people who are new to Vault create a Kubernetes Secret through Vault. There are...
0
2024-07-03T00:34:11
https://dev.to/tinhtq97/vault-secret-as-an-external-secret-357
This guide is here to help people who are new to Vault create a Kubernetes Secret through Vault.

There are two ways to use a Vault Secret:

- Adding the annotations
- Using the External Secrets Operator

![External Secret](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/259zhq379pxgbfqrxrok.png)*<small>Source: external-secrets.io</small>*

I used the second approach in this article because I have used ArgoCD before and wanted to visualize all the resources I created.

## Walkthrough

![Create Secret](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dbo1k4r6v3n8a2fsgd46.png)

![Enable Secret Engine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7dt3mxiostq9x6wsp2j8.png)

![Path](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cujvzlcyventouz7v4qu.png)

![Value](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ke3q4za0r07eoxr8hzil.png)

Create a Secret for the Vault token:

```bash
kubectl create secret generic vault-token --from-literal=token=<token>
```

or

```bash
echo -n "token" | base64
```

```yaml
apiVersion: v1
data:
  token: <Encoded Vault Token>
kind: Secret
metadata:
  name: vault-token
type: Opaque
```

Install External Secrets using Helm:

```bash
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets
```

## SecretStore

![Note](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6dpu39w44mei9gya0guw.png)

Create the secret-store.yaml file:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: secret-store
spec:
  provider:
    vault:
      server: http://<serverAddress>:<port>
      path: <path>          # see the picture above
      version: "<version>"  # see the version near the path above
      auth:
        tokenSecretRef:
          name: vault-token
          key: token
```

## ExternalSecret

Create the external-secret.yaml file:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: external-secret
spec:
  refreshInterval: 5m          # interval for fetching new values
  secretStoreRef:
    name: secret-store         # SecretStore name
    kind: SecretStore
  target:
    name: secret-to-be-created # name of the Secret to create
    creationPolicy: Owner
  dataFrom:
    - extract:
        key: "<key>"           # see the picture above
```

It will automatically get all values from the Vault Secret and create a new Kubernetes Secret if it does not exist.

## References

[HashiCorp Vault](https://external-secrets.io/latest/provider/hashicorp-vault/?source=post_page-----78173a0d9b5a--------------------------------)

---

English is not my first language, and I am not good at it, but I want to improve and enhance my English every day. Writing a blog is the best way I have chosen. Please let me know if there is anything confusing because of my English.
tinhtq97
1,909,505
Building Interactive Voice Response Systems with Twilio
Introduction Interactive Voice Response (IVR) systems are an essential part of modern...
0
2024-07-03T00:33:08
https://dev.to/kartikmehta8/building-interactive-voice-response-systems-with-twilio-b7j
javascript, beginners, programming, tutorial
## Introduction

Interactive Voice Response (IVR) systems are an essential part of modern businesses, allowing customers to interact with a company's automated phone system through voice or touch-tone inputs. With the rise of remote work and digital communication, building IVR systems has become increasingly important for businesses of all sizes. Twilio, a cloud communications platform, offers an easy and cost-effective solution for building IVR systems. In this article, we will explore the advantages and disadvantages of using Twilio for building IVR systems, as well as some of its key features.

## Advantages

One of the biggest advantages of using Twilio for IVR is its flexibility. It offers a wide range of tools and APIs that can be customized to fit the specific needs of a business. Additionally, Twilio's pay-as-you-go pricing model makes it a cost-effective option for companies on a budget. Furthermore, with Twilio's cloud infrastructure, businesses can handle large call volumes without any disruption.

## Disadvantages

While Twilio is a powerful tool for building IVR systems, it does have some limitations. One major disadvantage is the technical expertise required to set up and customize the system. Businesses may need to hire developers or have in-house technical knowledge to fully utilize Twilio's features. Additionally, Twilio's pricing structure can become expensive for businesses that handle a high volume of calls.

## Features

Twilio offers numerous features that make it an ideal choice for building IVR systems. Its programmable voice API allows for easy integration with existing phone systems and the ability to add custom prompts, menu options, and call routing. Furthermore, Twilio's speech-to-text and text-to-speech capabilities make it easy to create a natural and user-friendly IVR experience for customers.

### Example of a Simple Twilio IVR Setup

```javascript
// Twilio IVR example in Node.js
const express = require('express');
const app = express();
const VoiceResponse = require('twilio').twiml.VoiceResponse;

// Twilio posts form-encoded webhook parameters (such as Digits),
// so this body parser is required for req.body to be populated
app.use(express.urlencoded({ extended: false }));

app.post('/ivr', (req, res) => {
  const twiml = new VoiceResponse();
  const gather = twiml.gather({
    numDigits: 1,
    action: '/ivr/handle-key',
    method: 'POST',
  });
  gather.say('Press 1 for sales, 2 for support, or 3 to speak to an operator.');

  // If the user doesn't enter input, loop
  twiml.redirect('/ivr');

  res.type('text/xml');
  res.send(twiml.toString());
});

app.post('/ivr/handle-key', (req, res) => {
  const twiml = new VoiceResponse();

  switch (req.body.Digits) {
    case '1':
      twiml.say('Connecting you to sales.');
      twiml.redirect('YOUR_SALES_DEPARTMENT_URL');
      break;
    case '2':
      twiml.say('Connecting you to support.');
      twiml.redirect('YOUR_SUPPORT_DEPARTMENT_URL');
      break;
    case '3':
      twiml.say('Connecting you to an operator.');
      twiml.dial('OPERATOR_PHONE_NUMBER');
      break;
    default:
      twiml.say('Invalid option.');
      twiml.redirect('/ivr');
      break;
  }

  res.type('text/xml');
  res.send(twiml.toString());
});

app.listen(3000, () => {
  console.log('IVR server running on port 3000');
});
```

## Conclusion

In conclusion, Twilio is a reliable and versatile tool for businesses looking to build IVR systems. Its flexibility, cost-effectiveness, and powerful features make it a top choice for companies of all sizes and industries. While it may have some drawbacks, the benefits of using Twilio for IVR systems far outweigh its limitations. With Twilio's user-friendly interface and customizable options, businesses can provide their customers with a seamless and efficient interactive voice response experience.
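To try the example against a real phone call, Twilio needs a publicly reachable webhook URL. One common approach, assuming the file is saved as `index.js` and you have ngrok installed, looks like this; the forwarding hostname will be whatever ngrok prints:

```bash
# Run the server, then open a public tunnel to port 3000
node index.js
ngrok http 3000

# In the Twilio console, set your number's Voice webhook (HTTP POST) to:
#   https://<your-ngrok-host>/ivr
```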
kartikmehta8
1,909,504
Data Fetching in Next.js 14: A Comprehensive Guide
Data fetching is a crucial aspect of building robust and dynamic applications. With the release of...
0
2024-07-03T00:27:48
https://dev.to/usmanghani1518/data-fetching-in-nextjs-14-a-comprehensive-guide-1dc
react, nextjs, javascript, webdev
Data fetching is a crucial aspect of building robust and dynamic applications. With the release of Next.js 14, developers have even more powerful tools at their disposal to fetch data efficiently and seamlessly. In this post, we will explore the various data fetching methods in Next.js 14, complete with examples and best practices.

## Introduction to Data Fetching in Next.js 14

Next.js 14 brings several enhancements and new features to improve the developer experience and application performance. Data fetching is one of the core areas where Next.js excels, offering different methods to suit various needs, including static site generation (SSG), server-side rendering (SSR), and client-side fetching.

**1. Static Site Generation (SSG) with getStaticProps**

SSG allows you to generate HTML at build time, which can be served to clients instantly. This method is ideal for pages with content that doesn't change frequently.

**Example**

```javascript
// pages/blog/[id].js
export async function getStaticProps({ params }) {
  const res = await fetch(`https://api.example.com/posts/${params.id}`);
  const post = await res.json();

  return { props: { post } };
}
```

**2. Server-Side Rendering (SSR) with getServerSideProps**

SSR generates HTML on each request, making it perfect for dynamic content that changes often or requires authentication.

**Example**

```javascript
// pages/profile.js
export async function getServerSideProps() {
  const res = await fetch('https://api.example.com/user/profile');
  const profile = await res.json();

  return { props: { profile } };
}
```

**3. Incremental Static Regeneration (ISR)**

ISR allows you to update static content after the site has been built, ensuring your pages stay up-to-date without a full rebuild.

**Example**

```javascript
// pages/products/[id].js
export async function getStaticProps({ params }) {
  const res = await fetch(`https://api.example.com/products/${params.id}`);
  const product = await res.json();

  return { props: { product }, revalidate: 60 };
}
```

**4. Client-Side Fetching with SWR**

SWR (stale-while-revalidate) is a React hook library for client-side data fetching, ensuring your UI is always fast and reactive.

**Example**

```javascript
import useSWR from 'swr';

const fetcher = (url) => fetch(url).then((res) => res.json());

export default function Profile() {
  const { data, error } = useSWR('/api/user', fetcher);

  if (error) return <div>Failed to load</div>;
  if (!data) return <div>Loading...</div>;

  return <div>Hello, {data.name}</div>;
}
```

**Conclusion**

Next.js 14 offers a variety of powerful data fetching methods to suit different needs, whether you're building a static site, a dynamic web application, or a combination of both. By leveraging these methods, you can ensure your application is fast, responsive, and up-to-date. Happy coding!
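One postscript: the examples above use the Pages Router. In the App Router that Next.js 14 ships alongside it, the same caching behaviors are expressed through the extended `fetch` API; a minimal sketch, with a placeholder URL, could look like this:

```javascript
// app/products/[id]/page.js: App Router equivalent of ISR
export default async function ProductPage({ params }) {
  // next.revalidate gives this fetch ISR-like behavior (refresh every 60s)
  const res = await fetch(`https://api.example.com/products/${params.id}`, {
    next: { revalidate: 60 },
  });
  const product = await res.json();

  return <h1>{product.name}</h1>;
}
```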
usmanghani1518
1,909,500
Another Approach to Screen Routing in SwiftUI
First of all, why custom routing? There are several answers and reasons for implementing custom...
0
2024-07-03T00:18:02
https://dev.to/mrasyadc/another-approach-to-screen-routing-in-swiftui-42ao
swift, ios
First of all, why custom routing? There are several answers and reasons for implementing custom routing in Swift. On the technical side, implementing custom routing significantly reduces the boilerplate code that tends to pile up when presenting views from other files. The approach is similar to routing in web frameworks like ExpressJS, Laravel (PHP), and others.

Since it is similar to web frameworks, the amount of time needed to learn the pattern is reduced. This increases productivity when working with several developers, because it becomes much easier to implement routing for different types of views in SwiftUI. This ease is very beneficial with lots and lots (and different types) of screens and views, since the team can collaborate and delegate tasks for different views.

Okay, enough talk. So how would we implement a web-style approach to routing in SwiftUI projects?

First, we need to create two Swift files. One file will store our routing data and the path chosen by the user; we can simply call it the Router model.

```swift
// Model/Router.swift
import SwiftUI

class Router: ObservableObject {
    @Published var path: [Destination] = []

    enum Destination: String, Hashable {
        case SplashScreen, Home, Story, SecondStory
    }

    static let shared: Router = .init()
}
```

Okay, let's talk about it some more. We see that SwiftUI is imported; this is because we need the ObservableObject protocol on our Router model class. We specifically need @Published to make sure our path can be observed by the views later.

We store the user's routing and movement between screens/views in the path variable. The path variable is defined as an array of the Destination enum. We use the Destination enum as a way to constrain the pages the user can visit; this forces the code to look up the Destination in the enum first and helps catch bugs even before the app is built for production. A singleton pattern is also used, via the shared variable that initializes the class within itself so it can be shared with other files and views.

```swift
// Routes/RouteView.swift
import SwiftUI

struct RouteView: View {
    @StateObject private var navPath = Router.shared

    var body: some View {
        NavigationStack(path: $navPath.path) {
            // This is the root page view (you can use anything)
            // It does not have to be SplashScreenView
            SplashScreenView()
            // end of the root page view
                .toolbar(.hidden)
                .navigationDestination(for: Router.Destination.self) { destination in
                    switch destination {
                    case .Story:
                        StoryFirstView()
                    case .SecondStory:
                        StorySecondView()
                    case .Home:
                        HomeView()
                    case .SplashScreen:
                        SplashScreenView()
                    }
                }
        }
    }
}
```

Secondly, the RouteView. This is where we define the routes for our SwiftUI project. The @StateObject variable navPath is used to detect changes to the @Published variable in the Router model; the view then re-renders based on the new state of the path array through the $navPath.path binding. We also need to handle every case of the Destination enum in the Router model, or the code will fail to compile, because the switch statement needs to be exhaustive over the Destination enum.

Lastly, you can use this RouteView as your root view anywhere, and you will have this nice web-like routing that benefits the whole development team. Cheers!
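As a usage sketch: pushing and popping destinations from anywhere becomes a one-liner against the shared router. `StoryTeaserView` is a hypothetical view name used only for illustration:

```swift
import SwiftUI

struct StoryTeaserView: View {
    var body: some View {
        VStack {
            Button("Read the story") {
                // Appending a Destination pushes the matching screen
                Router.shared.path.append(.Story)
            }
            Button("Back") {
                // Removing the last element pops the current screen
                if !Router.shared.path.isEmpty {
                    Router.shared.path.removeLast()
                }
            }
        }
    }
}
```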
mrasyadc
1,909,499
Automating User and Group Management with a Bash Script.
Managing users on a Linux system can be a time-consuming task, especially in environments where users...
0
2024-07-03T00:17:01
https://dev.to/candy-devops/automating-user-and-group-management-with-a-bash-script-2jli
Managing users on a Linux system can be a time-consuming task, especially in environments where users frequently join or leave. Automating this process can save administrators a lot of time and reduce human error. In this article, we'll walk through a Bash script designed to automate the creation of Linux users and their respective groups. This script ensures security, logging, and proper group management.

## Why Automate User Management?

Automation helps maintain consistency and efficiency in repetitive tasks. In large organizations or during internships, like those offered by the [HNG Internship](https://hng.tech/internship), managing multiple users can quickly become a complex and error-prone process. A well-designed script can streamline this process, making it easier to manage users securely and effectively.

### Prerequisites

Before diving into the script, ensure you have the following:

- A Linux operating system (tested on Ubuntu).
- Bash shell (`/bin/bash`).
- OpenSSL for password generation (`openssl`).
- Root privileges (`sudo`).

The source code can be found on my [GitHub repo](https://github.com/Candy-DevOps/Linux_User_Creation.git).

## The Bash Script

Here's the full script:

```bash
#!/bin/bash

# Check if the input file exists
if [ ! -f "$1" ]; then
    echo "Error: Input file not found."
    exit 1
fi

# Ensure log and secure directories are initialized once
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Initialize log file
if [ ! -f "$LOG_FILE" ]; then
    sudo touch "$LOG_FILE"
    sudo chown root:root "$LOG_FILE"
fi

# Initialize password file
if [ ! -f "$PASSWORD_FILE" ]; then
    sudo mkdir -p /var/secure
    sudo touch "$PASSWORD_FILE"
    sudo chown root:root "$PASSWORD_FILE"
    sudo chmod 600 "$PASSWORD_FILE"
fi

# Redirect stdout and stderr to the log file
exec > >(sudo tee -a "$LOG_FILE") 2>&1

# Function to check if user exists
user_exists() {
    id "$1" &>/dev/null
}

# Function to check if a group exists
group_exists() {
    getent group "$1" > /dev/null 2>&1
}

# Function to check if a user is in a group
user_in_group() {
    id -nG "$1" | grep -qw "$2"
}

# Read each line from the input file
while IFS=';' read -r username groups; do
    # Trim whitespace
    username=$(echo "$username" | tr -d '[:space:]')
    groups=$(echo "$groups" | tr -d '[:space:]')

    # Check if the user already exists
    if user_exists "$username"; then
        echo "User $username already exists."
    else
        # Create user
        sudo useradd -m "$username"

        # Generate random password
        password=$(openssl rand -base64 12)

        # Set password for user
        echo "$username:$password" | sudo chpasswd

        # Log actions
        echo "User $username created. Password: $password"

        # Store passwords securely
        echo "$username,$password" | sudo tee -a "$PASSWORD_FILE"
    fi

    # Ensure the user's home directory exists with the correct ownership
    sudo mkdir -p "/home/$username"
    sudo chown "$username:$username" "/home/$username"

    # Split the groups string into an array
    IFS=',' read -ra group_array <<< "$groups"

    # Check each group
    for group in "${group_array[@]}"; do
        if group_exists "$group"; then
            echo "Group $group exists."
        else
            echo "Group $group does not exist. Creating group $group."
            sudo groupadd "$group"
        fi

        if user_in_group "$username" "$group"; then
            echo "User $username is already in group $group."
        else
            echo "Adding user $username to group $group."
            sudo usermod -aG "$group" "$username"
        fi
    done
done < "$1"
```

### How the Script Works

1. **Input File Verification**: The script starts by checking if the input file exists. If not, it exits with an error message.
2. **Log and Secure Directory Initialization**: The script sets up a log file to keep track of its actions and a secure password file to store user passwords. These files are created with proper permissions to ensure security.
3. **User and Group Management**:
   - The script reads each line from the input file, which contains usernames and their associated groups.
   - It checks if each user already exists. If not, it creates the user and assigns a randomly generated password.
   - It ensures the user's home directory exists with the correct ownership.
   - The script then processes each group, checking if it exists and creating it if necessary, before adding the user to the group.

### Usage

1. Save the user information in a file, e.g., `usersname.txt`, formatted as `username;group1,group2,group3` (a sample file is shown at the end of this post).
2. Run the script with the user information file as an argument:

```bash
./user_creation_script.sh usersname.txt
```

### Benefits of This Script

- **Automation**: Reduces the manual effort required to manage users and groups.
- **Security**: Ensures passwords are stored securely and logs all actions for auditing purposes.
- **Consistency**: Maintains a consistent approach to user and group management.

## Conclusion

Automating user management on Linux systems can significantly enhance efficiency and security. This script provides a robust solution for creating users and managing their groups. For those interested in more automation and development practices, the [HNG Internship](https://hng.tech/internship) offers a great opportunity to learn and grow in a collaborative environment. By leveraging such scripts, administrators can focus on more critical tasks, knowing that user management is handled consistently and securely. For more information on hiring talents trained in such automation practices, visit [HNG Hire](https://hng.tech/hire).

---

Feel free to share your thoughts or improvements on this script in the comments! Happy coding!

Written by: Candy-DevOps
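For reference, a `usersname.txt` matching the expected format might look like this (the names and groups here are illustrative, not from the original post):

```
alice;developers,docker
bob;developers
carol;admins,developers,www-data
```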
candy-devops
1,812,681
Welcome Thread - v283
Leave a comment below to introduce yourself! You can talk about what brought you here, what...
0
2024-07-03T00:00:00
https://dev.to/devteam/welcome-thread-v283-g1a
welcome
--- published_at : 2024-07-03 00:00 +0000 --- ![Flashing text that says Welcome!](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0reb8zfegacsbla8wb64.gif) --- 1. Leave a comment below to introduce yourself! You can talk about what brought you here, what you're learning, or just a fun fact about yourself. 2. Reply to someone's comment, either with a question or just a hello. 👋 3. Share your wins from this week in our weekly ["What was your win this week?"](https://dev.to/t/weeklyretro) thread.
sloan
1,909,497
User creation script using bash shell.
In this article, I will demonstrate how a SysOps administrator employs bash shell scripting to create...
0
2024-07-02T23:57:05
https://dev.to/linuxinator/user-creation-script-using-bash-shell-kem
In this article, I will demonstrate how a SysOps administrator employs bash shell scripting to create multiple users, assign them unique passwords, and add them to different groups. Well, who's a SysOps admin? A SysOps (System Operations) Administrator, also known as a Systems Administrator or SysAdmin, is a professional responsible for managing, maintaining, and ensuring the smooth operation of an organization's IT infrastructure. The role involves a wide range of tasks to keep the organization's systems running efficiently, securely, and reliably. Among the key responsibilities handled by a SysOps admin, one of the most important is user management. In an infrastructure where Linux is the operating system of choice, a SysAdmin can use bash shell scripting to manage and maintain user access. Here, I will explain how a SysAdmin makes use of shell scripting to manage user, group, and password creation with ease.

```
#!/bin/bash

# Check if the input file is provided
if [ -z "$1" ]; then
  echo "Usage: $0 <input_file>"
  exit 1
fi

INPUT_FILE="$1"
```

The script starts with a shebang statement, which defines the shell this script should be run with; in this case, it's a bash shell script. The other lines check whether an input file was given when running the script; more on that file later, after the whole script is prepared.

```
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Ensure the log file exists
touch "$LOG_FILE"

# Ensure the secure directory and password file exist with correct permissions
mkdir -p /var/secure
touch "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"
```

While creating the users, we need to log every action taken during the creation of users, passwords, and groups for future reference. The above code ensures that the log file, the secure directory, and the password file exist, and restricts the password file's permissions to the owner only (600).

```
# Function to generate a random password
generate_password() {
    tr -dc A-Za-z0-9 </dev/urandom | head -c 12 ; echo ''
}
```

The next step is to generate random passwords for our users; this function simply produces a 12-character password.

```
# Read the input file line by line
while IFS=";" read -r user groups; do
    # Remove leading/trailing whitespace from user and groups
    user=$(echo "$user" | xargs)
    groups=$(echo "$groups" | xargs)
```

This loop reads the input file mentioned earlier line by line, so the script can create the usernames and groups specified in it. An input file entry might look like this: `Luffy; straw-hats`. The code also trims any surrounding whitespace.

```
    # Create a personal group with the same name as the user
    if ! getent group "$user" &>/dev/null; then
        groupadd "$user"
        echo "$(date +'%Y-%m-%d %H:%M:%S') - Created personal group $user" | tee -a "$LOG_FILE"
    fi
```

The next step checks whether a personal group with the same name as the user already exists; if it doesn't, the group is created using `groupadd` and the action is logged to the log file.

```
    if id "$user" &>/dev/null; then
        echo "$(date +'%Y-%m-%d %H:%M:%S') - User $user already exists. Skipping..." | tee -a "$LOG_FILE"
        continue
    fi

    # Create the user with the personal group
    useradd -m -s /bin/bash -g "$user" "$user"
    echo "$(date +'%Y-%m-%d %H:%M:%S') - Created user $user with personal group $user" | tee -a "$LOG_FILE"
```

This step checks whether the user already exists and logs the result. If the user doesn't exist, it creates the user with a home directory (`-m`), sets the login shell to `/bin/bash` (`-s`), and assigns the personal group as the user's primary group (`-g`).

```
    # Set the home directory permissions
    chmod 700 "/home/$user"
    chown "$user:$user" "/home/$user"
    echo "$(date +'%Y-%m-%d %H:%M:%S') - Set permissions for /home/$user" | tee -a "$LOG_FILE"
```

This simply sets the permissions of the user's home directory to 700 and logs the action.

```
    # Generate a random password and set it
    password=$(generate_password)
    echo "$user:$password" | chpasswd
    echo "$(date +'%Y-%m-%d %H:%M:%S') - Set password for $user" | tee -a "$LOG_FILE"

    # Securely store the password
    echo "$user,$password" >> "$PASSWORD_FILE"
    echo "$(date +'%Y-%m-%d %H:%M:%S') - Stored password for $user in $PASSWORD_FILE" | tee -a "$LOG_FILE"
```

A password is generated and set for the user; the username and password are stored in the password file, and both actions are logged in the log file.

```
    # Add user to specified groups
    IFS="," read -r -a group_array <<< "$groups"
    for group in "${group_array[@]}"; do
        group=$(echo "$group" | xargs)  # Remove leading/trailing whitespace
        if ! getent group "$group" &>/dev/null; then
            groupadd "$group"
            echo "$(date +'%Y-%m-%d %H:%M:%S') - Created group $group" | tee -a "$LOG_FILE"
        fi
        usermod -aG "$group" "$user"
        echo "$(date +'%Y-%m-%d %H:%M:%S') - Added user $user to group $group" | tee -a "$LOG_FILE"
    done
done < "$INPUT_FILE"
```

Here, each specified group is checked for existence and created if missing, and the user is added to it using `usermod`. Every action here is then logged to the log file.

`echo "$(date +'%Y-%m-%d %H:%M:%S') - User creation process completed." | tee -a "$LOG_FILE"`

Finally, a message marking the end of the creation process is printed and logged to the log file.

To use this script, you will need to create an input file with the `.txt` extension. Before you run it, make the script executable using the `chmod +x script.sh` command. Here is an example of what the input file should look like:

```
coby;navy
luffy;straw-hats
edward-newgate;whitebeard
shanks;red-hair
```

coby, luffy, edward-newgate, and shanks are usernames, while navy, straw-hats, whitebeard, and red-hair are the groups the users will be added to.

To run the script: `sudo ./script.sh input.txt` (a few quick verification commands are shown after the full script at the end of this post).

Conclusion: Using a bash script makes user management seamless for system admins. This is a task given by the HNG internship. To find out about the internship, visit https://hng.tech/internship, or https://hng.tech/hire to participate. Thank you for your time.
Here is the full script: ``` #!/bin/bash # Check if the input file is provided if [ -z "$1" ]; then echo "Usage: $0 <input_file>" exit 1 fi INPUT_FILE="$1" LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Ensure the log file exists touch "$LOG_FILE" # Ensure the secure directory and password file exist with correct permissions mkdir -p /var/secure touch "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE" # Function to generate a random password generate_password() { tr -dc A-Za-z0-9 </dev/urandom | head -c 12 ; echo '' } # Read the input file line by line while IFS=";" read -r user groups; do # Remove leading/trailing whitespace from user and groups user=$(echo "$user" | xargs) groups=$(echo "$groups" | xargs) # Create a personal group with the same name as the user if ! getent group "$user" &>/dev/null; then groupadd "$user" echo "$(date +'%Y-%m-%d %H:%M:%S') - Created personal group $user" | tee -a "$LOG_FILE" fi if id "$user" &>/dev/null; then echo "$(date +'%Y-%m-%d %H:%M:%S') - User $user already exists. Skipping..." | tee -a "$LOG_FILE" continue fi # Create the user with the personal group useradd -m -s /bin/bash -g "$user" "$user" echo "$(date +'%Y-%m-%d %H:%M:%S') - Created user $user with personal group $user" | tee -a "$LOG_FILE" # Set the home directory permissions chmod 700 "/home/$user" chown "$user:$user" "/home/$user" echo "$(date +'%Y-%m-%d %H:%M:%S') - Set permissions for /home/$user" | tee -a "$LOG_FILE" # Generate a random password and set it password=$(generate_password) echo "$user:$password" | chpasswd echo "$(date +'%Y-%m-%d %H:%M:%S') - Set password for $user" | tee -a "$LOG_FILE" # Securely store the password echo "$user,$password" >> "$PASSWORD_FILE" echo "$(date +'%Y-%m-%d %H:%M:%S') - Stored password for $user in $PASSWORD_FILE" | tee -a "$LOG_FILE" # Add user to specified groups IFS="," read -r -a group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) # Remove leading/trailing whitespace if ! getent group "$group" &>/dev/null; then groupadd "$group" echo "$(date +'%Y-%m-%d %H:%M:%S') - Created group $group" | tee -a "$LOG_FILE" fi usermod -aG "$group" "$user" echo "$(date +'%Y-%m-%d %H:%M:%S') - Added user $user to group $group" | tee -a "$LOG_FILE" done done < "$INPUT_FILE" echo "$(date +'%Y-%m-%d %H:%M:%S') - User creation process completed." | tee -a "$LOG_FILE" ```
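As a quick, optional sanity check (the names come from the sample input file above), you can confirm the script did its job with standard tools:

```bash
# Confirm the user exists and inspect its group memberships
id luffy

# Confirm a group from the input file was created
getent group straw-hats

# Review the logged actions
sudo tail /var/log/user_management.log

# The password file is chmod 600, so use sudo to read it
sudo cat /var/secure/user_passwords.csv
```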
linuxinator
1,909,496
Best AI Tools - for June 2024 🎊
Best AI tools for June - Substack :) MAIN LIST:...
0
2024-07-02T23:51:21
https://dev.to/pinkmashpotato/best-ai-tools-for-june-2024-2pl7
webdev, javascript, beginners, programming
[Best AI tools for June - Substack](https://bestaitoolsforyou.substack.com/p/best-ai-tools-for-june-2024) :) MAIN LIST: https://github.com/pink-mash-potato/awesome-ai-tools #adhd #productivity #ai #neurospicy #aitools #genai
pinkmashpotato
1,909,495
The HNG11 Internship program can significantly help me achieve my goals in several ways:
Professional Objectives: Design Leadership: HNG's team-based projects and collaborations can help...
0
2024-07-02T23:50:44
https://dev.to/leke_jeremiah_1997/the-hng11-internship-program-can-significantly-help-me-achieve-my-goals-in-several-ways-1e4l
Professional Objectives:

1. _Design Leadership_: HNG's team-based projects and collaborations can help me develop leadership skills and experience.
2. _Expertise Expansion_: HNG's diverse projects and mentorship can expose me to emerging design technologies and best practices.
3. _Cross-Functional Collaboration_: HNG's interdisciplinary teams and feedback sessions can enhance my collaboration skills with developers, product managers, and other stakeholders.
4. _Design Innovation_: HNG's project challenges and hackathons can provide opportunities for innovative design thinking and problem-solving.
5. _Design System Development_: HNG's emphasis on design systems and consistency can help me develop skills in creating and maintaining design systems.

Personal Objectives:

1. _Continuous Learning_: HNG's mentorship, workshops, and feedback sessions can support my continuous learning and skill development.
2. _Mentorship_: HNG's experienced mentors can provide guidance and support, helping me achieve my goals.
3. _Personal Projects_: HNG's flexible project structure can allow me to work on personal projects and receive feedback from peers and mentors.
4. _Networking_: HNG's community of designers, developers, and product managers can expand my professional network and connections.
5. _Wellness and Balance_: HNG's emphasis on work-life balance and self-care can help me prioritize my well-being and maintain a healthy lifestyle.

By actively participating in the HNG Internship program, I believe I can leverage these opportunities to achieve my goals and accelerate my growth as a designer.
leke_jeremiah_1997
1,909,489
Exploring the JavaScript Ecosystem: A Complete Map of Frameworks and Libraries
JavaScript has evolved enormously since its beginnings, becoming one of the most...
0
2024-07-02T23:21:41
https://dev.to/deveg4/explorando-el-ecosistema-de-javascript-un-mapa-completo-de-frameworks-y-librerias-41ek
javascript
![JavaScript tree](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t446fzhfkdeutcvtu3ix.png) JavaScript has evolved enormously since its beginnings, becoming one of the most versatile and widely used programming languages in web and mobile development. With a vibrant, constantly growing community, the JavaScript ecosystem is full of frameworks and libraries that make it easier to build applications of every kind. I have created a visual map detailing the most popular and useful frameworks and libraries in the JavaScript world. This diagram not only lists each tool but also provides a brief description of its purpose and the problem it solves, along with an indication of its popularity. **Discover the Essential Tools in the JavaScript Ecosystem**: The diagram is organized into several key sections, covering all the important areas of JavaScript development: **Frontend Frameworks**: Tools like Angular, React, and Vue.js that make it easier to build dynamic, complex user interfaces. **Utility Libraries**: Including jQuery and Lodash, which simplify common web development tasks such as DOM manipulation and working with data. **Build Tools**: Webpack, Babel, and other bundlers that optimize and prepare your code for production. **Testing Frameworks**: Such as Jest and Mocha, which ensure code quality and reliability through automated tests. **Mobile App Frameworks**: React Native and Ionic, which let you build native and hybrid mobile applications with web technologies. **Backend/Full-Stack Frameworks**: Node.js, Express.js, and others that bring the power of JavaScript to the server side. _Why Is This Map Useful?_ This diagram is an excellent reference for novice and veteran developers alike. It provides a clear, concise view of the tools available in the JavaScript ecosystem, making it easier to pick the right tool for each project. Whether you are starting a new project or looking to improve an existing one, this map will help you understand the available options and make informed decisions. I invite you to explore the map and share your experiences and opinions about the different tools. Is there a framework or library you consider essential that isn't in the diagram? What has your experience been with these tools? Share your thoughts in the comments!
deveg4
1,909,494
Custom CRM Software Development for Real Estate Agencies
In the competitive Australian real estate market, staying ahead requires more than just a keen eye...
0
2024-07-02T23:47:06
https://dev.to/brokeralvin/custom-crm-software-development-for-real-estate-agencies-2bnd
realestate, customsoftwaredevelopment, customsoftwaredevelopers, softwaredevelopment
In the competitive Australian real estate market, staying ahead requires more than just a keen eye for property values. It demands efficient management of leads, listings, and client communications. While off-the-shelf CRM solutions exist, they often fall short in addressing the unique challenges faced by real estate agencies. This is where custom CRM [software development](https://www.c9.com.au/Solutions/Web-Application-Software-Development/Custom-Software-Development) comes into play, offering tailored solutions that can transform your agency's operations and boost your bottom line. ## The Limitations of Generic CRM Systems Generic CRM systems, while useful for many industries, often lack the specific features real estate agencies need. They may not integrate well with property listing platforms, struggle to handle the complexities of property transactions, or fail to provide the detailed reporting required in the real estate sector. These shortcomings can lead to inefficiencies, missed opportunities, and frustrated agents. ## The Power of Custom CRM Software for Real Estate Bespoke CRM software, designed specifically for real estate agencies, offers a range of benefits: 1. **Streamlined Lead Management**: Custom CRMs can automatically capture and categorise leads from various sources, including your website, property portals, and social media platforms. This ensures no potential client falls through the cracks. 2. **Integrated Property Listings**: A tailored CRM can seamlessly integrate with property listing databases, allowing agents to access and update property information in real-time, directly from the CRM interface. 3. **Automated Client Communication**: Personalised email campaigns, SMS notifications, and follow-up reminders can be automated based on client preferences and transaction stages, keeping your agency top-of-mind without overwhelming your staff. 4. **Advanced Reporting and Analytics**: [Custom reporting tools can provide insights into market trends](https://www.c9.com.au/Solutions/Business-Intelligence-and-Reporting-Developers), agent performance, and ROI on marketing campaigns, enabling data-driven decision-making. 5. **Mobile Accessibility**: With many agents working on-the-go, a custom CRM can be optimised for [mobile use](https://www.c9.com.au/Solutions//Mobile-App-Developers), allowing access to critical information and tools from anywhere. 6. **Integration with Australian Real Estate Platforms**: Bespoke CRMs can be designed to integrate with popular Australian platforms like realestate.com.au and Domain, streamlining listing management and lead generation. ## The Development Process Creating a custom CRM for your real estate agency involves several key steps: 1. **Needs Analysis**: We work closely with your team to understand your specific processes, pain points, and goals. 2. **Design and Planning**: Our developers create a detailed blueprint of the CRM, focusing on user experience and functionality. 3. **Development**: Using cutting-edge technologies, we build your CRM from the ground up, ensuring it meets your unique requirements. 4. **Testing and Refinement**: Rigorous testing is conducted to ensure the CRM performs flawlessly under real-world conditions. 5. **Implementation and Training**: We assist with data migration and provide comprehensive training to ensure your team can leverage the full power of the new system. 6. **Ongoing Support and Updates**: As your business evolves, we're here to update and enhance your CRM to meet your changing needs. 
## The Return on Investment While developing a custom CRM requires an initial investment, the long-term benefits far outweigh the costs. Real estate agencies that implement bespoke CRM solutions often report: - Increased lead conversion rates - Improved agent productivity - Enhanced client satisfaction and retention - More accurate forecasting and planning - Significant time savings on administrative tasks ## Conclusion In the fast-paced world of Australian real estate, having the right tools can make all the difference. Custom CRM [software development](https://www.c9.com.au/Solutions/Web-Application-Software-Development/Custom-Software-Development) offers real estate agencies the opportunity to streamline operations, improve client relationships, and gain a competitive edge. By investing in a tailored solution that addresses your specific needs, you're not just buying software – you're investing in the future success of your agency. Ready to revolutionise your real estate agency with a custom CRM solution? [Contact us today](https://www.c9.com.au/About/Contact-Developers) to discuss how we can help you build the perfect system for your unique needs. [https://www.c9.com.au/Solutions/Web-Application-Software-Development/Custom-Software-Development](https://www.c9.com.au/Solutions/Web-Application-Software-Development/Custom-Software-Development)
brokeralvin
1,909,493
Hello there, lets talk!
Hi there, I'm Carlos, an Angolan 21-year-old computer science student living in Lisbon,...
0
2024-07-02T23:46:15
https://dev.to/carlituscg/hello-there-lets-talk-3kci
webdev, networking, beginners
Hi there, I'm Carlos, an Angolan 21-year-old computer science student living in Lisbon, Portugal. Lately I've been trying to build some connections and network, so I figured it'd be better to create an account over here. Right now I'm exploring Web Development and UX/UI, and my goal is to share the adventure with whoever wants to read about it. Every once in a while I'll update this blog with what's been happening in my journey; I hope you enjoy it. If you'd like to share knowledge, feel free to connect with me in any way (I'm not sure how this platform works yet, so [here's my LinkedIn](https://www.linkedin.com/in/carlos-van-dúnem/)).
carlituscg
1,909,491
Building a Simple Blog with Server.js: A Practical Introduction to Web Development with Node.js
In this tutorial, we will explore how server.js, an agile and robust framework for Node.js, can be...
0
2024-07-02T23:28:57
https://dev.to/gustavogarciapereira/construindo-um-blog-simples-com-serverjs-uma-introducao-pratica-ao-desenvolvimento-web-com-nodejs-5f8b
javascript, beginners, node, serverjs
In this tutorial, we will explore how server.js, an agile and robust framework for Node.js, can be used to create a simple but functional blog. We will demonstrate step by step how to set up the server, manage routes, and process data, all with the elegance and simplicity that server.js offers. This practical guide is ideal for developers who want to get web applications running quickly, offering an excellent introduction to the capabilities of server.js, from building interfaces to handling web requests efficiently. [documentation](https://serverjs.io/documentation/)

### About the Project

The project presented here is a small blog built with the server.js framework. The blog can list stored posts and allows new posts to be created through a simple web interface. The project structure includes a server configured with specific routes to display existing posts and to receive new ones via web forms. Posts are stored in an in-memory list that simulates a database, which makes demonstration and development easier since no additional database setup is needed.

### Project Features

The blog lets users view a list of posts and open specific content by clicking the titles listed on the home page. Users can also add new posts through a simple interface that collects a title and content via the POST method. This application is ideal for demonstrating the basics of server.js, including route handling, receiving and processing form data, and rendering dynamic pages with Handlebars templates.

### About server.js

Server.js is a lightweight, modern framework for Node.js designed to simplify the creation of web servers and APIs. It abstracts away much of the complexity of managing HTTP servers directly, letting developers quickly configure routes, middleware, and responses. Server.js is built on top of Express, one of the most popular libraries for web applications in Node.js, providing an additional layer of simplicity and ready-to-use functionality.

### Features of the Tool

Server.js offers a simplified way to handle HTTP routes, process requests, and send responses, including support for JSON and redirects. It makes it easy to integrate middleware and use templates to render web pages, and it allows detailed configuration through a range of options covering everything from security to full server customization. This makes server.js an efficient choice for developers who want speed when building Node.js web applications.

### Project Structure

First, make sure you have Node.js installed and create a new directory for your project. In the project directory, create a file named `index.js`, which will be the entry point of your server.

### Installing Dependencies

You need to install the `server` package before starting. Open a terminal and run:

```bash
npm install server
```

### Creating the Server

Here is the code for `index.js` that sets up the server and the routes:

```javascript
const server = require('server');
const { get, post } = server.router;
const { render, status, redirect } = server.reply;

// Simulating a simple in-memory database
let posts = [
  { id: 1, title: "First Post", content: "This is the first blog post." }
];

// Route to show the list of posts
server([
  get('/', ctx => render('index.hbs', { posts })),
  get('/post/:id', ctx => {
    const post = posts.find(p => p.id == ctx.params.id);
    return post ? render('post.hbs', { post }) : status(404);
  }),
  post('/', ctx => {
    const newPost = {
      id: posts.length + 1,
      title: ctx.data.title,
      content: ctx.data.content
    };
    posts.push(newPost);
    return redirect('/');
  })
]);

console.log('Server running on port 3000...');
```

### Handlebars Templates

You will also need some Handlebars templates to render the HTML. Create a `views` directory and add the following files:

- `index.hbs`: to show the list of all posts.
- `post.hbs`: to show a specific post.

#### Example `index.hbs`

```handlebars
<h1>Blog</h1>
<ul>
  {{#each posts}}
    <li><a href="/post/{{this.id}}">{{this.title}}</a></li>
  {{/each}}
</ul>
<a href="/new">New Post</a>
```

#### Example `post.hbs`

```handlebars
<article>
  <h1>{{post.title}}</h1>
  <p>{{post.content}}</p>
</article>
<a href="/">Back</a>
```

### Running the Server

Now that your server is configured and your templates are ready, you can start the server with Node.js:

```bash
node index.js
```

This basic example demonstrates how to use server.js to build a simple blog with post viewing and creation. The routes handle GET and POST requests, while the Handlebars templates present the posts in a user-friendly way.
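One detail worth noting: `index.hbs` links to `/new`, but the routes above never define that page. As a minimal sketch (the `new.hbs` filename is my assumption, not part of the original tutorial), you could add one more route and a small form template that posts back to `/`:

```javascript
// Added inside the array passed to server([...]):
get('/new', ctx => render('new.hbs')),
```

```handlebars
<!-- views/new.hbs -->
<h1>New Post</h1>
<form method="post" action="/">
  <input type="text" name="title" placeholder="Title" required />
  <textarea name="content" placeholder="Content" required></textarea>
  <button type="submit">Publish</button>
</form>
```

Note that server.js ships with CSRF protection enabled by default, so while prototyping you may need to start the server with options such as `server({ security: { csrf: false } }, ...routes)` for a plain form POST like this to be accepted.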
gustavogarciapereira
1,909,490
Level up your Git security: Verified commits with Kleopatra!
I just finished securing my GitHub workflow by setting up verified commits using Kleopatra and...
0
2024-07-02T23:25:59
https://dev.to/deni_sugiarto_1a01ad7c3fb/level-up-your-git-security-verified-commits-with-kleopatra-5147
git, security, github, programming
I just finished securing my GitHub workflow by setting up verified commits using Kleopatra and followed this awesome guide: [YouTube Verified Commits on GitHub from Windows PC](https://www.youtube.com/watch?v=xj9OiJL56pM) **Why verified commits?** Having a green checkmark next to your commits on GitHub isn't just for show. It tells everyone that your changes are coming from a trusted source, adding an extra layer of security to your projects. This is especially important for open-source contributions or collaborative work. **Kleopatra to the rescue!** Kleopatra is a fantastic tool for managing GPG keys on Windows. The video walks you through the entire process, from installing GPG4Win (which includes Kleopatra) to generating your key and configuring it with GitHub. **My experience:** Overall, the process was smooth sailing. The guide provides clear instructions, and using Kleopatra made managing the keys a breeze. Now I can push my commits with confidence, knowing they're cryptographically signed and verifiable. **Feeling secure?** If you're looking to add some extra security to your GitHub workflow, I highly recommend checking out verified commits with Kleopatra. It's a worthwhile investment for any developer! P.S. Have any of you tried verified commits before? Let me know your thoughts in the comments!
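For reference, the Git side of the setup the video walks through generally boils down to a few commands (the key ID and the Windows path below are illustrative placeholders, not values from my setup):

```bash
# Tell Git which GPG key to sign with (find the ID in Kleopatra)
git config --global user.signingkey 3AA5C34371567BD2

# Sign every commit by default
git config --global commit.gpgsign true

# If Git can't find GPG4Win's gpg.exe, point it there explicitly
git config --global gpg.program "C:/Program Files (x86)/GnuPG/bin/gpg.exe"

# After committing, verify the signature locally
git log --show-signature -1
```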
deni_sugiarto_1a01ad7c3fb
1,909,480
Automating User Creation and Management with Bash: A Step-by-Step Guide
Automating user creation and management can save time, reduce errors, and enhance security for SysOps...
0
2024-07-02T23:18:51
https://dev.to/centinno88/automating-user-creation-and-management-with-bash-a-step-by-step-guide-3afm
Automating user creation and management can save time, reduce errors, and enhance security for SysOps engineers. In this article, we will go over a Bash script that automates the creation of users and groups, sets up home directories, and manages passwords securely. The script reads from a text file where each line consists of a username and their corresponding groups, logs every action in a log file, and saves the randomly generated passwords for each user in a secure `.csv` file accessible only to the owner. Let's dive into the bash script and break it down step by step.

**Line-by-Line Explanation**

1.) First off, we have our shebang.
```
#!/bin/bash
```
This specifies the interpreter the script should be run with. Since it is a "bash" script, it should be run with the Bourne Again Shell (Bash) interpreter; some commands in the script may not be interpreted correctly outside of Bash.

2.) The paths for the log file and the password file are set to avoid unnecessary repetition in the script.
```
# Define log and password storage files
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"
```

3.) To ensure that the bash script runs with root privileges, an if statement checks if the Effective User ID (EUID) is equal to zero. The EUID determines the permissions the script will use to run, and 0 represents the root user ID on Linux systems. Only users with administrative privileges (users who can use sudo, or the root user itself) can run the script. If someone attempts to run it without such privileges, an error message is printed and the script terminates.
```
# Check if the script is run with root privileges
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run with root privileges." >&2
    exit 1
fi
```

4.) To ensure that an input file is provided as an argument when running the script, this `if` statement will terminate the script if no argument is provided. In this statement, `$#` represents the number of arguments provided when running the script. If it is equal to zero (no argument provided) or greater than or equal to 2, an error message is printed and the script's execution is halted.
```
# Check if the input file is provided
if [[ $# -eq 0 || $# -ge 2 ]]; then
    echo "Usage: $0 <user_file>" >&2
    exit 1
fi
```

5.) Next is the `log_action` function that records logs using bold lettering (formatted with the ANSI escape codes `\033[1m` and `\033[0m`) and a timestamp (using the `date` command with the format `'%Y-%m-%d %H:%M:%S'`). This function is used to log important steps, success messages, and error messages in the script.
```
# Log function
log_action() {
    echo "--------------------------------------------------" | tee -a "$LOG_FILE"
    echo -e "$(date +'%Y-%m-%d %H:%M:%S') - \033[1m$1\033[0m" | tee -a "$LOG_FILE"
    echo "--------------------------------------------------" | tee -a "$LOG_FILE"
}
```

6.) Next is the `create_user_account` function that manages the entire process of creating a user, setting up their home directories with appropriate permissions and ownership, adding them to specified groups, and assigning randomly generated passwords. Every important step is logged.

**`create_user_account` function**
```
create_user_account() {
    local username="$1"
    local groups="$2"

    log_action "Creating user account '$username'..."

    # Check if user already exists
    if id "$username" &> /dev/null; then
        echo "User '$username' already exists. Skipping..." | tee -a "$LOG_FILE"
        return 1
    fi

    # Create user with home directory and set shell
    if useradd -m -s /bin/bash "$username"; then
        echo "User $username created successfully." | tee -a "$LOG_FILE"
    else
        echo "Error creating user $username." | tee -a "$LOG_FILE"
        return 1
    fi

    # Create user group if it does not exist (in case the script is run in other linux distributions that do not create user groups by default)
    if ! getent group "$username" >/dev/null; then
        groupadd "$username"
        usermod -g "$username" "$username"
        log_action "Group $username created."
    fi

    # Set up home directory permissions
    echo "Setting permissions for /home/$username..." | tee -a "$LOG_FILE"
    chmod 700 "/home/$username" && chown "$username:$username" "/home/$username"
    if [[ $? -eq 0 ]]; then
        echo "Permissions set for /home/$username." | tee -a "$LOG_FILE"
    else
        echo "Error setting permissions for /home/$username." | tee -a "$LOG_FILE"
        return 1
    fi

    # Add user to additional groups (comma separated)
    echo "Adding user $username to specified additional groups..." | tee -a "$LOG_FILE"
    IFS=',' read -ra group_array <<< "$groups"
    for group in "${group_array[@]}"; do
        group=$(echo "$group" | xargs)
        # Check if group exists, if not create it
        if ! getent group "$group" &>/dev/null; then
            if groupadd "$group"; then
                echo "Group $group did not exist. Now created." | tee -a "$LOG_FILE"
            else
                echo "Error creating group $group." | tee -a "$LOG_FILE"
                continue
            fi
        fi
        # Add user to group
        if gpasswd -a "$username" "$group"; then
            echo "User $username added to group $group." | tee -a "$LOG_FILE"
        else
            echo "Error adding user $username to group $group." | tee -a "$LOG_FILE"
        fi
    done

    # Log if no additional groups are specified
    if [[ -z "$groups" ]]; then
        echo "No additional groups specified." | tee -a "$LOG_FILE"
    fi

    # Generate random password, set it for the user, and store it in a file
    echo "Setting password for user $username..." | tee -a "$LOG_FILE"
    password=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 12)
    echo "$username:$password" | chpasswd
    if [[ $? -eq 0 ]]; then
        echo "Password set for user $username." | tee -a "$LOG_FILE"
        echo "$username,$password" >> "$PASSWORD_FILE"
    else
        echo "Error setting password for user $username. Deleting $username user account" | tee -a "$LOG_FILE"
        userdel -r "$username"
        return 1
    fi
}
```

- In the function, I initially set local variables to hold the values of the specified username and groups, to avoid repetition.
```
local username="$1"
local groups="$2"
```
- In this next section, I use the `log_action` function to record the start of each user account creation. Additionally, I verify whether the user already exists. If the user does exist, a "Skipping" message is logged and the function returns, so the script moves on to the next user.
```
log_action "Creating user account '$username'..."

# Check if user already exists
if id "$username" &> /dev/null; then
    echo "User '$username' already exists. Skipping..." | tee -a "$LOG_FILE"
    return 1
fi
```
- Next, there is an if statement that uses the `useradd` command with the `-m` and `-s` flags to create a user with a login shell (in this case, the login shell is set to `/bin/bash`; if you want, you can modify or remove the `-s /bin/bash` part entirely) and assigns a home directory to the user. If the command fails, an error is logged and the function returns, skipping that user.
```
# Create user with home directory and set shell
if useradd -m -s /bin/bash "$username"; then
    echo "User $username created successfully." | tee -a "$LOG_FILE"
else
    echo "Error creating user $username." | tee -a "$LOG_FILE"
    return 1
fi
```
- Next, in case the script is run on a Linux distribution that does not create and assign a primary group with the same name as the newly created user, the if statement here will create a group with the same name as the user and set it as the user's primary group.
```
# Create user group if it does not exist (in case the script is run in other linux distributions that do not create user groups by default)
if ! getent group "$username" >/dev/null; then
    groupadd "$username"
    usermod -g "$username" "$username"
    log_action "Group $username created."
fi
```
- In this section of the function, the newly created user is designated as the owner of the newly created home directory. The owner is also granted all possible permissions for the directory. This is all done using the `chmod` and `chown` commands. If this process is unsuccessful, an error message is printed and the function returns.
```
# Set up home directory permissions
echo "Setting permissions for /home/$username..." | tee -a "$LOG_FILE"
chmod 700 "/home/$username" && chown "$username:$username" "/home/$username"
if [[ $? -eq 0 ]]; then
    echo "Permissions set for /home/$username." | tee -a "$LOG_FILE"
else
    echo "Error setting permissions for /home/$username." | tee -a "$LOG_FILE"
    return 1
fi
```
- In this section, the newly created user is added to the additional groups specified in the input file. By setting the Internal Field Separator (IFS) to expect comma-separated values and using the `read` command with the `-ra` flags, the groups are individually placed inside an array called `group_array` to be used in the subsequent for loop. Within the loop, for every value in group_array, the `xargs` command removes any whitespace, the group is created if it does not exist, and finally the user is added to the group using the `gpasswd` command with the `-a` flag. In the case where no group is specified for the user in the input file, a message is printed.
```
# Add user to additional groups (comma separated)
echo "Adding user $username to specified additional groups..." | tee -a "$LOG_FILE"
IFS=',' read -ra group_array <<< "$groups"
for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)
    # Check if group exists, if not create it
    if ! getent group "$group" &>/dev/null; then
        if groupadd "$group"; then
            echo "Group $group did not exist. Now created." | tee -a "$LOG_FILE"
        else
            echo "Error creating group $group." | tee -a "$LOG_FILE"
            continue
        fi
    fi
    # Add user to group
    if gpasswd -a "$username" "$group"; then
        echo "User $username added to group $group." | tee -a "$LOG_FILE"
    else
        echo "Error adding user $username to group $group." | tee -a "$LOG_FILE"
    fi
done

# Log if no additional groups are specified
if [[ -z "$groups" ]]; then
    echo "No additional groups specified." | tee -a "$LOG_FILE"
fi
```
- For the final section of the function, a random 12-character password is generated and set for the user. The head command collects a stream of random bytes from the `/dev/urandom` file. This stream is piped to the `tr` command, which filters the bytes to include only alphanumeric characters (A-Z, a-z, 0-9) using the `-dc` flag. The filtered result is then piped to another head command, which selects only the first 12 characters from the edited stream. The password is then set by piping the user's name and the randomly generated password to the `chpasswd` command. The user and the generated password are saved in the designated password `.csv` file. If setting the password fails, the script deletes the user account and logs the error, to avoid any possible security risk.
```
# Generate random password, set it for the user, and store it in a file
echo "Setting password for user $username..." | tee -a "$LOG_FILE"
password=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 12)
echo "$username:$password" | chpasswd
if [[ $? -eq 0 ]]; then
    echo "Password set for user $username." | tee -a "$LOG_FILE"
    echo "$username,$password" >> "$PASSWORD_FILE"
else
    echo "Error setting password for user $username. Deleting $username user account" | tee -a "$LOG_FILE"
    userdel -r "$username"
    return 1
fi
```

7.) After creating the create_user_account function, the script processes a file containing user information and creates user accounts accordingly.
```
# Process the user file
user_file="$1"
while IFS=';' read -r username groups; do
    if create_user_account "$username" "${groups%%[ ;]}"; then
        log_action "User account '$username' created successfully."
    else
        log_action "Error creating user account '$username'."
    fi
done < "$user_file"
```
- The script takes the user file as its argument and assigns it to the variable `user_file`.
- A while loop reads each line of the user file. The `IFS=';'` part sets the Internal Field Separator to a semicolon (;), splitting each line at the semicolon. The `read -r username groups` part reads the split parts into the `username` and `groups` variables.
- For each line in the file, the script calls the create_user_account function with the username and the groups (with a trailing space or semicolon removed using `${groups%%[ ;]}`). The script also logs whether `create_user_account` succeeded or failed.

8.) After all that is done, the script restricts access to the password file to the owner (root) and those with root privileges, logs the completion of the run, and prints the log file and password file locations.
```
# Keep password file accessible only to those with root privileges
chmod 600 "$PASSWORD_FILE"

# Log completion
log_action "User creation script completed."

# Print log file and password file location
echo "Check $LOG_FILE for details."
echo "Check $PASSWORD_FILE for user passwords."
```

Prerequisites for running the script
- A Linux system.
- A bash terminal (optional; you can use any available shell terminal on the Linux system).
- Root privileges on your system.
- A text file containing the usernames and groups, formatted as `username;group1,group2` (a sample is shown at the end of this post).

To run the script:
1.) Copy the script from, or clone, the repository: https://github.com/Centinno88/Automating-user-creation-and-management-with-Bash/tree/main
2.) Copy the text file containing the usernames and groups to the folder where the script is located.
3.) Then, in the directory where both the script and the text file are now located, run `sudo ./create_users.sh <text file>`

**Conclusion**

This script simplifies some administrative tasks. With such a script, SysOps engineers can automate user and group creation and management, allowing them to focus on more critical work while ensuring efficient and secure user management. This is one of the project assignments in the [HNG Internship](url) program designed to enhance your resume and deepen your knowledge of bash scripting. For the best experience, visit [HNG Premium](url).
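For reference, an input file in the expected `username;group1,group2` format might look like this (the usernames and groups below are purely illustrative):

```
alice;sudo,dev
bob;dev,www-data
charlie;
```

A line like `charlie;` simply creates the user with no additional groups, which the script logs as "No additional groups specified."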
centinno88
1,909,479
List of top AI tools and products directories
I recently checked lots of AI directories to find the top AI tools for you. :) Top Tools - AI Tools...
0
2024-07-02T23:13:40
https://dev.to/pinkmashpotato/list-of-top-ai-tools-and-products-directories-3e61
ai, directories, javascript, programming
I recently checked lots of AI directories to find the top AI tools for you. :)

- [Top Tools](https://www.toptools.ai/) - AI Tools Directory
- [Tool Pilot](https://www.toolpilot.ai) - Navigate the World of AI Tools
- [TopAI.tools](https://topai.tools/) - Discover the best AI tools Everyday
- [Top AI Tools Hub](https://www.topaitoolshub.com/)
- [Toolify.ai](https://www.toolify.ai/) - Best AI Companies and Tools, Auto Updated Daily By ChatGPT
- [There's An AI](https://theresanai.com) - No 1 AI Aggregator
- [Stratup.ai](https://stratup.ai/) - AI-Powered Startup Ideas and Tools to Fuel Your Entrepreneurial Journey
- [Pugin](https://pugin.ai/) - Discover AI Plugins
- [Productivity Tools](https://productivity.directory) - A curated productivity directory
- [PoweredbyAI](https://poweredbyai.app) - AI TOOLS & PROMPTS!
- [OpenTools](https://opentools.ai/) - Chat with our GPT to find the right AI tool for you
- [OpenFuture AI](https://openfuture.ai/) - Largest Tools Directory, Fastest Update, The Most Accurate Database
- [Insidr AI Directory](https://www.insidr.ai/ai-tools/) - AI Tools Directory
- [Grabon AI Directory](https://www.grabon.in/indulge/ai-tools/) - The World's Best & Largest Directory Of AI Tools
- [FUTUREPEDIA](https://www.futurepedia.io/) - THE LARGEST AI TOOLS DIRECTORY, UPDATED DAILY
- [FutureTools](https://www.futuretools.io/) - Collects & Organizes All The Best AI Tools So YOU Too Can Become Superhuman!
- [Foundr](https://foundr.ai/) - Discover The Best AI Tools at Your Fingertips
- [Find my AI Tool](https://findmyaitool.com/) - Discover AI Tools for Your Business.
- [All Things AI](https://allthingsai.com/) - The Curated Resource of AI Tools
- [AI Top Tools](https://aitoptools.com/) - The place to go for AI Tools
- [AI Tools Marketer](https://aitoolsmarketer.com/) - Unlock the Power of AI: Discover, Learn, Compare, and Optimize with the Ultimate AI Tools Directory.
- [AI Tools List](https://aitoolslist.io/) - Best AI Tools Rated
- [AI Tools Guru](https://aitoolguru.com/) - THE LARGEST AI TOOLS DIRECTORY
- [AI Tools Directory](https://aitoolsdirectory.com/) - Curated list of AI tools
- [AI Tools Arena](https://aitoolsarena.com/) - Your Ultimate Resource for AI Tools and Insights
- [AI Scout](https://aiscout.net/) - AI Tools Directory
- [AI PEDIA HUB](https://aipediahub.com/) - THE LARGEST AI TOOLS DIRECTORY, UPDATED DAILY.
- [ainave](https://www.ainave.com) - Navigate the world of AI with ease!
- [AIDir.wiki](https://www.aidir.wiki/) - Discover the Latest AI Tools and Software to Take Your Business to the Next Level
- [Altern](https://altern.ai) - Find almost anything related to AI
pinkmashpotato
1,909,477
Very First Post
Well, here goes... So I will openly admit I am unsure of what to even put in this very first blog of...
0
2024-07-02T23:09:59
https://dev.to/joebush4466/very-first-post-14af
blog, javascript, learning, beginners
Well, here goes... So I will openly admit I am unsure of what to even put in this very first blog of mine. I joined the Flatiron School Software Engineering online program and am trying to make radical changes in my life. I am looking forward to the program and to getting into the IT workforce. I am not sure yet which avenue I will go down. I have family in different parts of the IT world: cousins in cyber security, siblings in web design. I personally never thought I would be trying to learn to code. But I also think I am capable of anything I truly set my mind to. My biggest enemy is myself, and that is who I'm trying to change. This is something I am committed to, and I promised myself I would not let myself down. So wish me luck; I will try to post regularly to show the progress of what I have learned and see if I can even teach something myself. "Do not go gentle into that good night. Rage. RAGE against the dying of the light" -Dylan Thomas Joe Bush
joebush4466
1,909,476
Automating User and Group Management on Linux with a Shell Script
In the world of system administration, managing users and groups efficiently is crucial. This article...
0
2024-07-02T23:08:53
https://dev.to/hayzeddev/automating-user-and-group-management-on-linux-with-a-shell-script-5b60
bash, shell, linux
In the world of system administration, managing users and groups efficiently is crucial. This article presents a shell script that automates user and group management on Linux systems, making the process seamless and efficient. The script takes in a file that lists the users to be added to a machine and their corresponding groups.

## Below is a detailed explanation of the shell script:

### 1. Script Header and Argument Check

```bash
#!/bin/bash

# Check if the correct number of arguments is passed
if [ "$#" -lt 1 ]; then
  echo "Usage: $0 <filename>"
  exit 1
fi
```

- #!/bin/bash: This line specifies that the script should be run using the Bash shell.
- The script checks if at least one argument is passed (the filename). If not, it prints a usage message and exits with a status of 1, signaling that a filename is needed; the filename can be a relative or absolute path to the file that lists all the users and groups. An example file is given later.

### 2. Filename Argument and File Existence Check

```bash
# The first argument is the filename
filename="$1"

# Check if the file exists
if [ ! -f "$filename" ]; then
  echo "File not found: $filename"
  exit 1
fi
```

- filename="$1": Stores the first argument in the filename variable.
- The script checks if the file exists. If not, it prints an error message and exits.

### 3. Function to Trim Whitespace

```bash
# Function to trim leading and trailing whitespace
trim() {
    local var="$*"
    # Remove leading whitespace
    var="${var#"${var%%[![:space:]]*}"}"
    # Remove trailing whitespace
    var="${var%"${var##*[![:space:]]}"}"
    echo -n "$var"
}
```

- This function removes leading and trailing whitespace from a string.
- local var="$*": Captures all arguments passed to the function.
- var="${var#"${var%%[![:space:]]*}"}": Removes leading whitespace.
- var="${var%"${var##*[![:space:]]}"}": Removes trailing whitespace.
- echo -n "$var": Prints the trimmed string without a newline.

### 4. Logging and Password File Setup

```bash
# Define the file to check
user_mgt_file_path="/var/log/user_management.log"
user_mgt_dir_path=$(dirname "$user_mgt_file_path")

# Check if the directory exists
if [ ! -d "$user_mgt_dir_path" ]; then
  # Create the directory if it doesn't exist
  mkdir -p "$user_mgt_dir_path"
fi

# Check if the file exists
if [ ! -f "$user_mgt_file_path" ]; then
  # Create the file if it doesn't exist
  touch "$user_mgt_file_path"
  echo "File '$user_mgt_file_path' created."
fi

# Define the file to check
user_pass_file_path="/var/secure/user_passwords.csv"
user_pass_dir_path=$(dirname "$user_pass_file_path")

# Check if the directory exists
if [ ! -d "$user_pass_dir_path" ]; then
  # Create the directory if it doesn't exist
  mkdir -p "$user_pass_dir_path"
  echo "Directory $user_pass_dir_path created..." >> $user_mgt_file_path
fi

# Check if the file exists
if [ ! -f "$user_pass_file_path" ]; then
  # Create the file if it doesn't exist
  touch "$user_pass_file_path"
  echo "File $user_pass_file_path created..." >> $user_mgt_file_path
fi
```

- Sets up paths for the user management log file and the user passwords CSV file.
- Checks if the directories exist and creates them if they don't.
- Checks if the files exist and creates them if they don't, logging the creation actions.

### 5. Loop Through Each Line in the File

```bash
# Loop through each line in the file
while IFS= read -r line; do
    # Process each line
    # echo "Processing: $line"
```

- IFS=: Prevents leading/trailing whitespace from being trimmed.
- read -r line: Reads each line of the file into the line variable.

### 6. Process Each Line: Extract Username and Groups

```bash
    # Define the username and password
    # Trim leading and trailing whitespace
    username=$(trim "${line%%;*}")
    password=$(LC_CTYPE=C < /dev/urandom tr -dc 'A-Za-z0-9!@#$%&*' | head -c 16) # Generate a random password
    echo "Password generated for user $username" >> $user_mgt_file_path
    usergroups=$(trim "${line#*;}")
```

- username=$(trim "${line%%;*}"): Extracts the username from the line (before the semicolon).
- password=$(LC_CTYPE=C < /dev/urandom tr -dc 'A-Za-z0-9!@#$%&*' | head -c 16): Generates a random 16-character password.
- usergroups=$(trim "${line#*;}"): Extracts the groups (after the semicolon).

### 7. Split User Groups and Create Groups

```bash
    # Split the usergroups into an array
    IFS=',' read -r -a groups_array <<< "$usergroups"
    for i in "${!groups_array[@]}"; do
        groups_array[$i]=$(trim "${groups_array[$i]}")
    done

    # Extract the primary group (first element)
    primary_group="${groups_array[0]}"

    # Extract the remaining groups
    if [ "${#groups_array[@]}" -gt 1 ]; then
        additional_groups=$(IFS=,; echo "${groups_array[*]:1}")
    else
        additional_groups=""
    fi
```

- Splits the usergroups string into an array using commas.
- Trims each group name in the array.
- Sets the primary group to the first group in the array.
- Joins the remaining groups back into a comma-separated string if there are any.

### 8. Function to Create Groups if They Don't Exist

```bash
    # Function to check if a group exists, and create it if it doesn't
    create_group_if_not_exists() {
        groupname="$1"
        if ! getent group "$groupname" > /dev/null 2>&1; then
            groupadd "$groupname"
            echo "User group '$groupname' created..." >> $user_mgt_file_path
        else
            echo "User group '$groupname' already exist! Skipping" >> $user_mgt_file_path
        fi
    }
```

- This function checks if a group exists using getent group.
- If the group doesn't exist, it creates the group using groupadd and logs the action.

### 9. Create Primary and Additional Groups

```bash
    # Check and create primary group
    create_group_if_not_exists "$primary_group"

    # Check and create additional groups
    for group in "${groups_array[@]}"; do
        create_group_if_not_exists "$group"
    done
```

- Calls create_group_if_not_exists to create the primary group and each additional group if they don't exist.

### 10. Create User and Set Password

```bash
    # Check if the group already exists
    if ! getent group "$username" > /dev/null 2>&1; then
        # Create the group if it doesn't exist
        groupadd "$username"
        echo "Directory $user_pass_dir_path created..." >> $user_mgt_file_path # TODO
    fi

    # Check if the user already exists
    if id "$username" &>/dev/null; then
        echo "User '$username' already exists. Skipping..."
        echo "User '$username' already exists. Skipping..." >> $user_mgt_file_path
    else
        # Create the user with the primary group and additional groups if any
        if [ -n "$additional_groups" ]; then
            useradd -m -g "$primary_group" -G "$additional_groups" -s /bin/bash "$username"
        else
            useradd -m -g "$primary_group" -s /bin/bash "$username"
        fi

        # Create the user with the specified group and set the password
        echo "$username:$password" | chpasswd
        echo "User '$username' created! Password has also been set for the user" >> $user_mgt_file_path

        # Display the created username and password
        echo "Password for user '$username' is: $password" >> $user_pass_file_path

        # Set the home directory path
        home_directory="/home/$username"

        # Set permissions and ownership for the home directory
        chown "$username:$primary_group" "$home_directory"
        chmod 755 "$home_directory"

        # Ensure appropriate permissions for additional groups
        for group in "${groups_array[@]}"; do
            if [ "$group" != "$primary_group" ]; then
                chmod g+rx "$home_directory"
                setfacl -m "g:$group:rx" "$home_directory"
            fi
        done

        echo "User $username created with home directory $home_directory" >> $user_mgt_file_path
        echo "Users created!"
    fi
done < "$filename"
```

- Checks if a group with the username exists and creates it if not.
- Checks if a user with the username exists and skips creation if they do.
- Creates the user with the primary group and additional groups if specified.
- Sets the user's password.
- Logs the creation actions and stores the password in a CSV file.
- Sets the home directory ownership and permissions.
- Adds read and execute permissions on the home directory for any additional groups, using `chmod g+rx` and `setfacl`.

### Here is the full script

```bash
#!/bin/bash

# Check if the correct number of arguments is passed
if [ "$#" -lt 1 ]; then
  echo "Usage: $0 <filename>"
  exit 1
fi

# The first argument is the filename
filename="$1"

# Check if the file exists
if [ ! -f "$filename" ]; then
  echo "File not found: $filename"
  exit 1
fi

# Function to trim leading and trailing whitespace
trim() {
    local var="$*"
    # Remove leading whitespace
    var="${var#"${var%%[![:space:]]*}"}"
    # Remove trailing whitespace
    var="${var%"${var##*[![:space:]]}"}"
    echo -n "$var"
}

# Define the file to check
user_mgt_file_path="/var/log/user_management.log"
user_mgt_dir_path=$(dirname "$user_mgt_file_path")

# Check if the directory exists
if [ ! -d "$user_mgt_dir_path" ]; then
  # Create the directory if it doesn't exist
  mkdir -p "$user_mgt_dir_path"
fi

# Check if the file exists
if [ ! -f "$user_mgt_file_path" ]; then
  # Create the file if it doesn't exist
  touch "$user_mgt_file_path"
  echo "File '$user_mgt_file_path' created."
fi

# Define the file to check
user_pass_file_path="/var/secure/user_passwords.csv"
user_pass_dir_path=$(dirname "$user_pass_file_path")

# Check if the directory exists
if [ ! -d "$user_pass_dir_path" ]; then
  # Create the directory if it doesn't exist
  mkdir -p "$user_pass_dir_path"
  echo "Directory $user_pass_dir_path created..." >> $user_mgt_file_path
fi

# Check if the file exists
if [ ! -f "$user_pass_file_path" ]; then
  # Create the file if it doesn't exist
  touch "$user_pass_file_path"
  echo "File $user_pass_file_path created..."
>> $user_mgt_file_path fi # Loop through each line in the file while IFS= read -r line; do # Process each line # echo "Processing: $line" # Define the username and password # Trim leading and trailing whitespace username=$(trim "${line%%;*}") password=$(LC_CTYPE=C < /dev/urandom tr -dc 'A-Za-z0-9!@#$%&*' | head -c 16) # Generate a random password echo "Password generated for user $username" >> $user_mgt_file_path usergroups=$(trim "${line#*;}") # Split the usergroups into an array IFS=',' read -r -a groups_array <<< "$usergroups" for i in "${!groups_array[@]}"; do groups_array[$i]=$(trim "${groups_array[$i]}") # groups_array[$i]=$(echo "${groups_array[$i]}" | xargs) done # Extract the primary group (first element) primary_group="${groups_array[0]}" # Extract the remaining groups if [ "${#groups_array[@]}" -gt 1 ]; then additional_groups=$(IFS=,; echo "${groups_array[*]:1}") else additional_groups="" fi # Function to check if a group exists, and create it if it doesn't create_group_if_not_exists() { groupname="$1" if ! getent group "$groupname" > /dev/null 2>&1; then groupadd "$groupname" echo "User group '$groupname' created..." >> $user_mgt_file_path else echo "User group '$groupname' already exist! Skipping" >> $user_mgt_file_path fi } # Check and create primary group create_group_if_not_exists "$primary_group" # Check and create additional groups for group in "${groups_array[@]}"; do create_group_if_not_exists "$group" done # Check if the group already exists if ! getent group "$username" > /dev/null 2>&1; then # Create the group if it doesn't exist groupadd "$username" echo "Directory $user_pass_dir_path created..." >> $user_mgt_file_path # TODO fi # Check if the user already exists if id "$username" &>/dev/null; then echo "User '$username' already exists. Skipping..." echo "User '$username' already exists. Skipping..." >> $user_mgt_file_path else # Create the user with the primary group and additional groups if any if [ -n "$additional_groups" ]; then useradd -m -g "$primary_group" -G "$additional_groups" -s /bin/bash "$username" else useradd -m -g "$primary_group" -s /bin/bash "$username" fi # Create the user with the specified group and set the password # useradd -m -g "$username" -s /bin/bash "$username" echo "$username:$password" | chpasswd echo "User '$username' created! Password has also been set for the user" >> $user_mgt_file_path # Display the created username and password echo "Password for user '$username' is: $password" >> $user_pass_file_path # Set the home directory path home_directory="/home/$username" # Set permissions and ownership for the home directory chown "$username:$primary_group" "$home_directory" chmod 755 "$home_directory" # Ensure appropriate permissions for additional groups for group in "${groups_array[@]}"; do if [ "$group" != "$primary_group" ]; then chmod g+rx "$home_directory" setfacl -m "g:$group:rx" "$home_directory" fi done echo "User $username created with home directory $home_directory" >> $user_mgt_file_path echo "Users created!" fi done < "$filename" ``` ## How the Script Works 1. Argument Check: The script starts by checking if the correct number of arguments is passed. It expects a filename as an argument. 2. File Existence Check: It then checks if the provided file exists. 3. Whitespace Trimming Function: A function trim is defined to remove leading and trailing whitespaces from strings. 4. Log and Password File Management: The script ensures that the directories and files for logging user management actions and storing user passwords exist. 
5. Processing the Input File: The script reads the input file line by line, extracting the username and user groups, generating random passwords, and managing user and group creation accordingly.

## Using the Script

To use the script, save it to a file, for example, create_users.sh, and make it executable:

```bash
chmod +x create_users.sh
```

Create a file called users.txt. Make sure it is in the same directory as create_users.sh (or pass its full path). Here is an example:

```txt
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```

Run the script with the input file containing user information. Because the script creates users and groups, it must be run as root (or with sudo):

```bash
sudo ./create_users.sh users.txt
# OR
sudo bash create_users.sh users.txt
```

Learn More About the HNG Internship

- https://hng.tech/internship
- https://hng.tech/hire

## Summary

This script automates a significant part of user and group management, ensuring consistency and saving time for system administrators. The HNG Internship program encourages such practical projects, enhancing the skills of interns through real-world tasks. If you're interested in joining or hiring from the HNG Internship, visit the links above. Feel free to ask if you have any questions or need further modifications!
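After a run, you can spot-check the results with standard tools. A minimal verification sketch, assuming the users.txt example above and the log/password paths defined in the script:

```bash
# Confirm the user, its personal group, and supplementary groups
id light                      # expect groups: light, sudo, dev, www-data

# Confirm the listed groups were created
getent group dev www-data

# Review the script's log and the stored credentials (root only)
sudo tail -n 20 /var/log/user_management.log
sudo cat /var/secure/user_passwords.csv
```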
hayzeddev
1,909,475
My HNG Journey. Stage One: Automating Linux User and Group Management using a Bash Script
Introduction With a new HNG stage, comes a new and slightly difficult task. This is going...
0
2024-07-02T23:07:00
https://dev.to/ravencodess/my-hng-journey-stage-one-creating-a-multi-purpose-bash-script-559j
bash, backend, github, linux
## Introduction

With a new [HNG](https://hng.tech/internship) stage comes a new and slightly more difficult task. This is going to be a long read, so I'll keep the introduction short and just get into it. The source code can be found on my [GitHub repo](https://github.com/Ravencodess/shell-script-stage-1.git).

### Requirements

In this task, we will be writing a Bash script that takes in one argument, which will be a TXT file that contains a list of usernames and group names. For example:

```txt
john; admin, developer, tester
kourtney; hr, product
```

The script must accomplish the following tasks:

- Create users and groups based on the file content. Usernames and user groups are separated by a semicolon ";". Ignore whitespace.
- Each user must have a personal group with the same group name as the username; this group name will not be written in the text file.
- A user can have multiple groups, each group delimited by a comma ",".
- The file `/var/log/user_management.log` should be created and contain a log of all actions performed by your script.
- The file `/var/secure/user_passwords.txt` should be created and contain a list of all users and their passwords delimited by comma, and only the file owner should be able to read it.
- Handle errors gracefully.

### Prerequisites

- Basic understanding of Linux CLI and Bash Scripting

#### Step 1

**Handle Command Line Arguments and Input File Errors**

We want to ensure the script is being passed only one argument and that argument is indeed a file. If either check fails, we want to print an error to the terminal.

```bash
#!/bin/bash

# Check if the correct number of command line arguments is provided
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 <user_info_file>"
    exit 1
fi

# Assign the file name from the command line argument
input_file=$1

# Check if the input file exists
if [ ! -f "$input_file" ]; then
    echo "Error: File $input_file not found."
    exit 1
fi
```

#### Step 2

**Create a Logging Function**

One of our requirements states that we log all our actions to `/var/log/user_management.log`; let's create a function to handle that, and move it to the top of our script.

```bash
#!/bin/bash

# Function to log actions to /var/log/user_management.log
log_action() {
    local log_file="/var/log/user_management.log"
    local timestamp=$(date +"%Y-%m-%d %T")
    local action="$1"
    echo "[$timestamp] $action" | sudo tee -a "$log_file" > /dev/null
}
```

#### Step 3

**Create Log and Password File**

Now that we have our logging function defined we can log any action that happens during the script execution. Next we need to create the log file itself, create the password file, and assign permissions that allow only the file owner to access it.

```bash
# Check and create the log_file if it does not exist
log_file="/var/log/user_management.log"
if [ ! -f "$log_file" ]; then
    # Create the log file
    sudo touch "$log_file"
    log_action "$log_file has been created."
else
    log_action "Skipping creation of: $log_file (Already exists)"
fi

# Check and create the passwords_file if it does not exist
passwords_file="/var/secure/user_passwords.txt"
if [ ! -f "$passwords_file" ]; then
    # Create the file and set permissions
    sudo mkdir -p /var/secure/
    sudo touch "$passwords_file"
    log_action "$passwords_file has been created."

    # Set ownership permissions for passwords_file
    sudo chmod 600 "$passwords_file"
    log_action "Updated passwords_file permission to file owner"
else
    log_action "Skipping creation of: $passwords_file (Already exists)"
fi
```

#### Step 4

**Read Input File**

At this point the script has validated the command line argument and has confirmed it is a file; now it's time to loop through the file and read it line by line. We can accomplish this using a `while loop`. This will run until the last line of the input file. All our user and group creation logic will be done inside this `while loop`.

```bash
while IFS=';' read -r username groups; do
    # ... user and group creation logic goes here ...
done < "$input_file"
```

This `while loop` reads each line of the file, uses an internal field separator (IFS) to split the line based on the value assigned to the `IFS` variable, and then assigns values to the variables username and groups. For example, a line like `dora; design, marketing` would be read as: `username=dora groups=design, marketing`

The -r option ensures we don't treat backslashes '\' as escape characters.

#### Step 5

**Validate Username and Group Name**

Our script can now read a user input file line by line, but first we must validate the strings that are passed into our $username and $group variables to ensure they comply with Unix naming standards. We can handle this logic by creating a `validate_name` function and move it to the top of our script.

```bash
#!/bin/bash

# Function to validate username and group name
validate_name() {
    local name=$1
    local name_type=$2  # "username" or "groupname"

    # Check if the name contains only allowed characters and starts with a letter
    if [[ ! "$name" =~ ^[a-z][a-z0-9_-]*$ ]]; then
        log_action "Error: $name_type '$name' is invalid. It must start with a lowercase letter and contain only lowercase letters, digits, hyphens, and underscores."
        return 1
    fi

    # Check if the name is no longer than 32 characters
    if [ ${#name} -gt 32 ]; then
        log_action "Error: $name_type '$name' is too long. It must be 32 characters or less."
        return 1
    fi

    return 0
}
```

This function runs two checks:

- It checks if the name complies with naming standards using a regex expression (the name begins with a lowercase letter and must only include lowercase letters, numbers, dashes, and underscores).
- It makes sure the name is not longer than 32 characters.

Finally, it logs all actions into the log file created earlier.

#### Step 6

**Check if User or Group Already Exists**

After validating the strings passed into our variables, we also need to run a check to validate whether these names already exist on the system. We don't want to create duplicate users or groups.
We can achieve this by creating a `user_exists` and a `group_exists` function and moving them to the top of our script.

```bash
#!/bin/bash

# Function to check if a user exists
user_exists() {
    local username=$1
    if getent passwd "$username" > /dev/null 2>&1; then
        return 0  # User exists
    else
        return 1  # User does not exist
    fi
}

# Function to check if a group exists
group_exists() {
    local group_name=$1
    if getent group "$group_name" > /dev/null 2>&1; then
        return 0  # Group exists
    else
        return 1  # Group does not exist
    fi
}
```

#### Step 7

**Create User**

Now it's time to use our while loop to begin creating users. We will carry out these tasks in this step:

- String manipulation, which involves removing or collapsing white spaces
- Call the validate_name and user_exists functions to ensure we are creating a valid and unique username
- Generate a random password and assign it to the newly created user

Let's first define the `generate_password` function and place it alongside the functions we created earlier at the top of our script.

```bash
# Function to generate a random password
generate_password() {
    openssl rand -base64 12
}
```

Now everything is in place to create a user; we will utilize the `while loop` we created in **Step 4**.

```bash
# Read the file line by line and process
while IFS=';' read -r username groups; do
    # Extract the user name
    username=$(echo "$username" | xargs)

    # Validate username
    if ! validate_name "$username" "username"; then
        log_action "Invalid username: $username. Skipping."
        continue
    fi

    # Check if the user already exists
    if user_exists "$username"; then
        log_action "Skipped creation of user: $username (Already exists)"
        continue
    else
        # Generate a random password for the user
        password=$(generate_password)

        # Create the user with home directory and set password
        sudo useradd -m -s /bin/bash "$username"
        echo "$username:$password" | sudo chpasswd
        log_action "Successfully Created User: $username"
    fi

    # Ensure the user has a group with their own name; this is the default behaviour in most Linux distros
    if ! group_exists "$username"; then
        sudo groupadd "$username"
        log_action "Successfully created group: $username"
        sudo usermod -aG "$username" "$username"
        log_action "User: $username added to Group: $username"
    else
        log_action "User: $username added to Group: $username"
    fi
done < "$input_file"
```

#### Step 8

**Create Group(s)**

The next action to take is to create the groups for the user that was just created. We also need to validate each group name and check if it already exists before creating it and adding our user into it. We need to form a `group_array` based on the content of the `groups` variable so we can loop through it and create a group for each name in the array. Under the user creation logic, we can create groups with this:

```bash
while IFS=';' read -r username groups; do
    # ... user creation logic from Step 7 ...

    # Extract the groups and remove any spaces
    groups=$(echo "$groups" | tr -d ' ')

    # Split the groups by comma
    IFS=',' read -r -a group_array <<< "$groups"

    # Create the groups and add the user to each group
    for group in "${group_array[@]}"; do
        # Validate group name
        if ! validate_name "$group" "groupname"; then
            log_action "Invalid Group name: $group. Skipping Group for user $username."
            continue
        fi

        # Check if the group already exists
        if ! group_exists "$group"; then
            # Create the group if it does not exist
            sudo groupadd "$group"
            log_action "Successfully created Group: $group"
        else
            log_action "Group: $group already exists"
        fi

        # Add the user to the group
        sudo usermod -aG "$group" "$username"
    done
done < "$input_file"
```

#### Step 9

**Store Password Information in Secure Password File**

Let's round up the script execution by setting proper home directory permissions and also sending username and password information to the `passwords_file` we created in **Step 3**.

```bash
    # Set permissions for home directory
    sudo chmod 700 "/home/$username"
    sudo chown "$username:$username" "/home/$username"
    log_action "Updated permissions for home directory: '/home/$username' of User: $username to '$username:$username'"

    # Log the user created action
    log_action "Successfully Created user: $username with Groups: $username ${group_array[*]}"

    # Store username and password in secure file
    echo "$username,$password" | sudo tee -a "$passwords_file" > /dev/null
    log_action "Stored username and password in $passwords_file"
done < "$input_file"
```

#### Step 10

**Putting it All Together**

We've come to the end of the script; I did mention it was a long one 😁. But I enjoyed explaining every paragraph to you 🤗. If you want to discover amazing talents at HNG, click [here](https://hng.tech/hire). Thank you for reading ♥

Here's the full script for your reference:

```bash
#!/bin/bash

# Function to check if a user exists
user_exists() {
    local username=$1
    if getent passwd "$username" > /dev/null 2>&1; then
        return 0  # User exists
    else
        return 1  # User does not exist
    fi
}

# Function to check if a group exists
group_exists() {
    local group_name=$1
    if getent group "$group_name" > /dev/null 2>&1; then
        return 0  # Group exists
    else
        return 1  # Group does not exist
    fi
}

# Function to validate username and group name
validate_name() {
    local name=$1
    local name_type=$2  # "username" or "groupname"

    # Check if the name contains only allowed characters and starts with a letter
    if [[ ! "$name" =~ ^[a-z][a-z0-9_-]*$ ]]; then
        log_action "Error: $name_type '$name' is invalid. It must start with a lowercase letter and contain only lowercase letters, digits, hyphens, and underscores."
        return 1
    fi

    # Check if the name is no longer than 32 characters
    if [ ${#name} -gt 32 ]; then
        log_action "Error: $name_type '$name' is too long. It must be 32 characters or less."
        return 1
    fi

    return 0
}

# Function to generate a random password
generate_password() {
    openssl rand -base64 12
}

# Function to log actions to /var/log/user_management.log
log_action() {
    local log_file="/var/log/user_management.log"
    local timestamp=$(date +"%Y-%m-%d %T")
    local action="$1"
    echo "[$timestamp] $action" | sudo tee -a "$log_file" > /dev/null
}

# Check if the correct number of command line arguments is provided
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 <user_info_file>"
    exit 1
fi

# Assign the file name from the command line argument
input_file=$1

# Check if the input file exists
if [ ! -f "$input_file" ]; then
    echo "Error: File $input_file not found."
    exit 1
fi

# Check and create the log_file if it does not exist
log_file="/var/log/user_management.log"
if [ ! -f "$log_file" ]; then
    # Create the log file
    sudo touch "$log_file"
    log_action "$log_file has been created."
else
    log_action "Skipping creation of: $log_file (Already exists)"
fi

# Check and create the passwords_file if it does not exist
passwords_file="/var/secure/user_passwords.txt"
if [ ! -f "$passwords_file" ]; then
    # Create the file and set permissions
    sudo mkdir -p /var/secure/
    sudo touch "$passwords_file"
    log_action "$passwords_file has been created."

    # Set ownership permissions for passwords_file
    sudo chmod 600 "$passwords_file"
    log_action "Updated passwords_file permission to file owner"
else
    log_action "Skipping creation of: $passwords_file (Already exists)"
fi

echo "----------------------------------------"
echo "Generating Users and Groups"
echo "----------------------------------------"

# Read the file line by line and process
while IFS=';' read -r username groups; do
    # Extract the user name
    username=$(echo "$username" | xargs)

    # Validate username
    if ! validate_name "$username" "username"; then
        log_action "Invalid username: $username. Skipping."
        continue
    fi

    # Check if the user already exists
    if user_exists "$username"; then
        log_action "Skipped creation of user: $username (Already exists)"
        continue
    else
        # Generate a random password for the user
        password=$(generate_password)

        # Create the user with home directory and set password
        sudo useradd -m -s /bin/bash "$username"
        echo "$username:$password" | sudo chpasswd
        log_action "Successfully Created User: $username"
    fi

    # Ensure the user has a group with their own name; this is the default behaviour in most Linux distros
    if ! group_exists "$username"; then
        sudo groupadd "$username"
        log_action "Successfully created group: $username"
        sudo usermod -aG "$username" "$username"
        log_action "User: $username added to Group: $username"
    else
        log_action "User: $username added to Group: $username"
    fi

    # Extract the groups and remove any spaces
    groups=$(echo "$groups" | tr -d ' ')

    # Split the groups by comma
    IFS=',' read -r -a group_array <<< "$groups"

    # Create the groups and add the user to each group
    for group in "${group_array[@]}"; do
        # Validate group name
        if ! validate_name "$group" "groupname"; then
            log_action "Invalid Group name: $group. Skipping Group for user $username."
            continue
        fi

        # Check if the group already exists
        if ! group_exists "$group"; then
            # Create the group if it does not exist
            sudo groupadd "$group"
            log_action "Successfully created Group: $group"
        else
            log_action "Group: $group already exists"
        fi

        # Add the user to the group
        sudo usermod -aG "$group" "$username"
    done

    # Set permissions for home directory
    sudo chmod 700 "/home/$username"
    sudo chown "$username:$username" "/home/$username"
    log_action "Updated permissions for home directory: '/home/$username' of User: $username to '$username:$username'"

    # Log the user created action
    log_action "Successfully Created user: $username with Groups: $username ${group_array[*]}"

    # Store username and password in secure file
    echo "$username,$password" | sudo tee -a "$passwords_file" > /dev/null
    log_action "Stored username and password in $passwords_file"
done < "$input_file"

# Log the script execution to standard output
echo "----------------------------------------"
echo "Script Executed Successfully, logs have been published here: $log_file"
echo "----------------------------------------"
```
ravencodess
1,909,474
A Beginner's Guide to Open Source Contributions on GitHub
Contents Introduction Preparation Ways to Contribute Raise an issue or make a...
0
2024-07-02T23:06:42
https://dev.to/jjpark987/a-beginners-guide-to-open-source-contributions-on-github-4c3p
tutorial, beginners, github, opensource
## Contents

- [Introduction](#introduction)
- [Preparation](#preparation)
- [Ways to Contribute](#ways-to-contribute)
  - [Raise an issue or make a suggestion](#raise-an-issue-or-make-a-suggestion)
  - [Reproduce a reported bug](#reproduce-a-reported-bug)
  - [Test a pull request](#test-a-pull-request)
  - [Solve a reported bug](#solve-a-reported-bug)
- [General Guide](#general-guide)
- [Summary](#summary)
- [Resources](#resources)

## Introduction

Being a developer isn't all about creating cool solo projects. Some of the best repositories out there are **open source**, meaning they are maintained and updated by other developers over time. Maybe you have heard of some of these: VS Code, Git, React, TensorFlow, Django, etc. These repositories have reached their status because developers come together to fix common bugs and issues.

As a developer, it is crucial to know how to contribute to open source projects on GitHub. Not only does it reinforce our problem-solving skills, but it also allows aspiring developers to be a part of something greater. This blog will introduce how to make contributions to most open source projects on GitHub.

## Preparation

Before diving into making changes to any codebase, we must first select the appropriate repo. Selecting a repo as a first-time contributor can be daunting. For absolute beginners, we could start with the [first contributions](https://github.com/firstcontributions/first-contributions#first-contributions) tutorial. This repo provides step-by-step instructions on how to make your first contribution. Once you feel a bit more comfortable, we can look for real repos. There are some resources to help you find the right repo. They are listed here:

- [goodfirstissues.com](https://goodfirstissues.com/)
- [goodfirstissue.dev](https://goodfirstissue.dev/)
- [up-for-grabs.net](https://up-for-grabs.net/#/)
- [Code 52](https://github.com/code52?WT.mc_id=-blog-scottha)

More advanced folks may want to try different types of projects. A good starting point could be [GitHub's Explore](https://github.com/explore). Once you have found a repo you are interested in, we can dive into different ways to contribute.

## Ways to Contribute

There are many ways to contribute to an open source repo. I will cover the most common ways here, listed in increasing order of difficulty.

### Raise an issue or make a suggestion

One of the easiest ways to contribute to an open source project is by raising an issue or making a suggestion. By identifying and documenting a problem or improvement idea in the project's issue tracker, you help maintainers prioritize and address issues efficiently. When raising an issue, make sure to provide clear and detailed steps to reproduce the problem, along with any relevant context or error messages encountered. Similarly, when making a suggestion for improvement, articulate your rationale and propose practical solutions or enhancements that align with the project's goals.

### Reproduce a reported bug

When encountering a reported bug in an open source project, your ability to replicate the issue is crucial for developers to understand and resolve it effectively. Begin by following any steps or instructions provided in the bug report to reproduce the issue on your own system. This process involves documenting the exact steps taken, any specific configurations or environments required, and noting any error messages or unexpected behaviors encountered.
By accurately reproducing the bug and providing detailed feedback, you assist developers in pinpointing the root cause and implementing a targeted fix.

### Test a pull request

As developers propose changes to an open source project, they submit pull requests for review and integration. Testing a pull request involves evaluating the proposed changes to ensure they function as intended and do not introduce new issues or regressions. Begin by understanding the scope and purpose of the pull request, reviewing the associated code changes, and considering potential edge cases or scenarios not covered in the initial implementation. Execute relevant test cases, perform integration testing if applicable, and validate the overall impact of the changes on the project's functionality and performance.

### Solve a reported bug

Addressing a reported bug in an open source project involves identifying the root cause of the issue, proposing a solution, and implementing the necessary code changes to resolve it. Begin by analyzing the bug report, reproducing the issue if necessary, and investigating related code areas to understand the underlying cause. Once the problem is identified, devise a strategy to fix it while adhering to project coding standards and practices, typically found in CONTRIBUTING.md. Implement the solution by modifying the codebase, writing new tests if applicable, and ensuring compatibility with existing functionalities. Collaborate with project maintainers and contributors to review the proposed fix, incorporate feedback, and verify the resolution through thorough testing.

## General Guide

For all types of contributions, it is best practice, especially as a beginner, to be able to isolate any changes before submitting them. Here are the basic steps (a consolidated command sequence is included at the end of this post):

1. **Fork Repo**: Once you have found a repo to work on, fork it to create a copy of the repo under your GitHub account. This can be done by clicking the fork button at the top of the repo home page.
2. **Clone Repo**: Clone a copy of this to your local computer. This is done by going to a command line interface (such as Terminal on Mac) and running `git clone <SSH>`. The `<SSH>` can be found by clicking on Code on your forked repo.
3. **Create New Branch**: Create a new branch and switch to it for your specific contribution using `git switch -c <name-of-new-branch>`.
   - **Raise an issue or make a suggestion**: This step may not be necessary if you're simply raising an issue or making a suggestion without code changes.
   - **Reproduce a reported bug**: Use this branch to document your steps and any findings while reproducing the bug.
   - **Test a pull request**: Use this branch to fetch and test the changes from the pull request.
   - **Solve a reported bug**: Use this branch to develop your bug fix.
4. **Make Changes and Test**: Depending on the type of contribution:
   - **Reproduce a reported bug**: Follow the steps provided in the bug report and document your findings.
   - **Test a pull request**: Run the relevant tests and verify the changes.
   - **Solve a reported bug**: Implement your fix and test it thoroughly to ensure it works as intended.
5. **Commit Changes**: Add and commit the changes for submission. This is done by running `git add .` followed by `git commit -m "enter commit message here"`.
6. **Push Changes**: Run `git push -u origin <name-of-new-branch>` to publish your branch and its commits to GitHub (a plain `git push` fails for a branch that has no upstream yet).
7. **Create a Pull Request**: Navigate to "Pull Requests" on your forked repo and click on "New pull request". Provide a concise explanation of your changes and how they address the chosen issue.
   If needed, reference the issue number in the description. Once submitted, project maintainers will review the pull request.

And that's it. You have now completed your first open source contribution!

## Summary

Contributing to open source projects is a rewarding way to improve your skills, collaborate with other developers, and make a meaningful impact on software used by people worldwide. Whether you start by raising an issue, testing a pull request, or solving a bug, every contribution helps strengthen the project and its community. By following the steps outlined in this guide, you'll be well on your way to making your first open source contribution. Happy coding!

> This guide is meant to serve as a learning tool for people who are just starting to learn how to make open source contributions. Please read the official documentation linked below for more information.

Thank you for reading and if you have any questions or concerns about some of the material mentioned in this guide, please do not hesitate to contact me at jjpark987@gmail.com.

## Resources

- [GitHub's Getting Started](https://docs.github.com/en/get-started/exploring-projects-on-github/finding-ways-to-contribute-to-open-source-on-github)
- [Blog: Get involved in Open Source today - How to contribute a patch to a GitHub hosted Open Source project like Code 52](https://www.hanselman.com/blog/get-involved-in-open-source-today-how-to-contribute-a-patch-to-a-github-hosted-open-source-project-like-code-52)
- [first contributions](https://github.com/firstcontributions/first-contributions#first-contributions)
- [goodfirstissues.com](https://goodfirstissues.com/)
- [goodfirstissue.dev](https://goodfirstissue.dev/)
- [up-for-grabs.net](https://up-for-grabs.net/#/)
- [Code 52](https://github.com/code52?WT.mc_id=-blog-scottha)
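As promised in the General Guide, here is the workflow consolidated into one command sequence; the repository, username, and branch names below are placeholders:

```bash
# 1-2. Fork on GitHub, then clone your fork
git clone git@github.com:<your-username>/<forked-repo>.git
cd <forked-repo>

# 3. Create and switch to a feature branch
git switch -c fix-typo-in-readme

# 4. Make and test your changes, then...

# 5. Stage and commit
git add .
git commit -m "Fix typo in README"

# 6. Publish the branch to your fork
git push -u origin fix-typo-in-readme

# 7. Open a pull request from your fork's branch on GitHub
```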
jjpark987
1,908,482
Design Facebook - a social network
Problem definition Facebook is a social media platform for users to connect with people and engage...
0
2024-07-02T23:03:26
https://dev.to/muhammad_salem/design-facebook-a-social-network-4li8
**Problem definition**

Facebook is a social media platform for users to connect with people and engage with different types of media and content. Users can connect with other users by sending friend requests or using the direct messaging feature. Each user has a profile where they can create posts to share with their friends. Facebook also allows its users to create pages regarding a topic of interest and groups to form a community of similar people. Facebook generates a personalized feed for its users based on their friends, liked pages, groups, and the content they engage with to ensure the best experience.

**System Requirements**

We will focus on the following set of requirements while designing Facebook:

1. Each member should be able to add information about their basic profile, work experience, education, etc.
2. Any user of our system should be able to search other members, groups or pages by their name.
3. Members should be able to send and accept/reject friend requests from other members.
4. Members should be able to follow other members without becoming their friend.
5. Members should be able to create groups and pages, as well as join already created groups, and follow pages.
6. Members should be able to create new posts to share with their friends.
7. Members should be able to add comments to posts, as well as like or share a post or comment.
8. Members should be able to create privacy lists containing their friends.
9. Members can link any post with a privacy list to make the post visible only to the members of that list.
10. Any member should be able to send messages to other members.
11. Any member should be able to add a recommendation for any page.
12. The system should send a notification to a member whenever there is a new message or friend request or comment on their post.
13. Members should be able to search through posts for a word.

**Extended Requirement**: Write a function to find a connection suggestion for a member.

**Use case diagram**

We have three main actors in our system:

**Member**: All members can search for other members, groups, pages, or posts, as well as send friend requests, create posts, etc.

**Admin**: Mainly responsible for admin functions like blocking and unblocking a member, etc.

**System**: Mainly responsible for sending notifications for new messages, friend requests, etc.

Here are the top use cases of our system:

**Add/update profile**: Any member should be able to create their profile to reflect their work experiences, education, etc.

**Search**: Members can search for other members, groups or pages. Members can send a friend request to other members.

**Follow or Unfollow a member or a page**: Any member can follow or unfollow any other member or page.

**Send message**: Any member can send a message to any of their friends.

**Create post**: Any member can create a post to share with their friends and like or add comments to any post visible to them.

**Send notification**: The system can send notifications for new messages, friend requests, etc.

1. **Identify Core Entities**: Based on the requirements, we can identify the following main entities:

- User
- Profile
- Post
- Comment
- Group
- Page
- Message
- Notification
- PrivacyList

Reasoning: These entities represent the main concepts in our domain. They are derived from the nouns in our requirements and represent the core objects that our system will manage.

2. **Define Relationships**:

- User has one Profile
- User can have many Posts
- User can have many Comments
- User can be a member of many Groups
- User can follow many Pages
- User can send/receive many Messages
- User can have many Notifications
- User can have many PrivacyLists
- Post can have many Comments
- Group can have many Users as members
- Page can have many Users as followers

Reasoning: These relationships define how our entities interact with each other. They're crucial for understanding the structure of our domain.

3. **Implement Core Entities**: Let's start implementing our domain model in C#, discussing the reasoning behind each class:

```csharp
public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
    public Profile Profile { get; set; }
    public List<User> Friends { get; set; }
    public List<User> Followers { get; set; }
    public List<User> Following { get; set; }
    public List<Group> Groups { get; set; }
    public List<Page> FollowedPages { get; set; }
    public List<PrivacyList> PrivacyLists { get; set; }

    public void SendFriendRequest(User user) { /* Implementation */ }
    public void AcceptFriendRequest(User user) { /* Implementation */ }
    public void RejectFriendRequest(User user) { /* Implementation */ }
    public void Follow(User user) { /* Implementation */ }
    public void Unfollow(User user) { /* Implementation */ }
    public void JoinGroup(Group group) { /* Implementation */ }
    public void FollowPage(Page page) { /* Implementation */ }
    public void CreatePost(string content, PrivacyList privacyList = null) { /* Implementation */ }
    public void SendMessage(User recipient, string content) { /* Implementation */ }
}
```

Reasoning: The User class is central to our system. It encapsulates user-related data and actions. We've included methods for friend requests, following, joining groups, etc., as these are user-initiated actions.

```csharp
public class Profile
{
    public string About { get; set; }
    public List<WorkExperience> WorkExperiences { get; set; }
    public List<Education> Educations { get; set; }
    // Other profile-related properties
}

public class WorkExperience
{
    public string Company { get; set; }
    public string Position { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime? EndDate { get; set; }
}

public class Education
{
    public string Institution { get; set; }
    public string Degree { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime? EndDate { get; set; }
}
```

Reasoning: We've separated Profile into its own class to encapsulate profile-specific information. WorkExperience and Education are separate classes to allow for multiple entries and to keep the Profile class clean.

```csharp
public class Post
{
    public string Id { get; set; }
    public User Author { get; set; }
    public string Content { get; set; }
    public DateTime CreatedAt { get; set; }
    public List<Comment> Comments { get; set; }
    public List<User> Likes { get; set; }
    public List<User> Shares { get; set; }
    public PrivacyList PrivacyList { get; set; }

    public void AddComment(User user, string content) { /* Implementation */ }
    public void Like(User user) { /* Implementation */ }
    public void Share(User user) { /* Implementation */ }
}

public class Comment
{
    public string Id { get; set; }
    public User Author { get; set; }
    public string Content { get; set; }
    public DateTime CreatedAt { get; set; }
    public List<User> Likes { get; set; }

    public void Like(User user) { /* Implementation */ }
}
```

Reasoning: Post and Comment are separate classes but share some common properties. We've included methods for adding comments, liking, and sharing directly in the Post class as these are post-specific actions.

```csharp
public class Group
{
    public string Id { get; set; }
    public string Name { get; set; }
    public User Admin { get; set; }
    public List<User> Members { get; set; }
    public List<Post> Posts { get; set; }

    public void AddMember(User user) { /* Implementation */ }
    public void RemoveMember(User user) { /* Implementation */ }
    public void CreatePost(User user, string content) { /* Implementation */ }
}

public class Page
{
    public string Id { get; set; }
    public string Name { get; set; }
    public User Admin { get; set; }
    public List<User> Followers { get; set; }
    public List<Post> Posts { get; set; }
    public List<string> Recommendations { get; set; }

    public void AddFollower(User user) { /* Implementation */ }
    public void RemoveFollower(User user) { /* Implementation */ }
    public void CreatePost(User admin, string content) { /* Implementation */ }
    public void AddRecommendation(User user, string recommendation) { /* Implementation */ }
}
```

Reasoning: Group and Page are similar in some ways (they both have members/followers and posts) but different in others (pages have recommendations). We've kept them as separate classes to allow for these differences.

```csharp
public class Message
{
    public string Id { get; set; }
    public User Sender { get; set; }
    public User Recipient { get; set; }
    public string Content { get; set; }
    public DateTime SentAt { get; set; }
}

public class Notification
{
    public string Id { get; set; }
    public User Recipient { get; set; }
    public string Content { get; set; }
    public DateTime CreatedAt { get; set; }
    public bool IsRead { get; set; }
}

public class PrivacyList
{
    public string Id { get; set; }
    public string Name { get; set; }
    public List<User> Members { get; set; }
}
```

Reasoning: Message, Notification, and PrivacyList are relatively simple classes that encapsulate their respective concepts.

4. **Implement the Search Functionality**:

```csharp
public class SearchService
{
    public List<User> SearchUsers(string query) { /* Implementation */ }
    public List<Group> SearchGroups(string query) { /* Implementation */ }
    public List<Page> SearchPages(string query) { /* Implementation */ }
    public List<Post> SearchPosts(string query) { /* Implementation */ }
}
```

Reasoning: We've centralized search functionality in a separate service class. This allows for easy expansion of search capabilities and keeps the search logic separate from our domain entities.

5. **Implement the Notification System**:

```csharp
public class NotificationService
{
    public void SendNotification(User recipient, string content) { /* Implementation */ }
}
```

Reasoning: A separate notification service allows for centralized management of notifications and easy integration with external notification systems if needed.

6. **Implement the Connection Suggestion Functionality**:

```csharp
public class ConnectionSuggestionService
{
    public List<User> GetConnectionSuggestions(User user)
    {
        // Implementation could consider:
        // 1. Friends of friends
        // 2. Users with similar interests (based on liked pages, groups)
        // 3. Users with similar profile information (e.g., same school, workplace)
        // 4. Users who have interacted with similar posts
    }
}
```

Reasoning: This service encapsulates the logic for suggesting connections, which could be quite complex and involve multiple factors.

This design provides a solid foundation for the Facebook-like social network system. It encapsulates the main entities and their behaviors, defines clear relationships between classes, and separates concerns where appropriate.

Some trade-offs and considerations:

1. We've chosen to include methods like SendFriendRequest, AcceptFriendRequest in the User class. An alternative would be to have a separate FriendshipService to manage these operations. The trade-off is between having a more feature-rich User class versus a potentially more maintainable and scalable separate service.
2. We've kept the design relatively simple for clarity. In a real-world scenario, you might need to consider more complex scenarios like privacy settings, user roles, etc.
3. For scalability, you might want to consider separating read and write models (CQRS pattern) for high-traffic entities like Post.
4. In a real implementation, you'd need to carefully consider data consistency, especially for operations that modify multiple entities (like adding a comment, which affects both the Post and creates a new Comment).
5. This design doesn't address how data will be persisted. In a real-world scenario, you'd need to consider how these domain objects map to a database schema.
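To see how these pieces are meant to fit together, here is a rough usage sketch. The method bodies above are placeholders, so this illustrates intended call patterns rather than runnable end-to-end behavior; the user names and IDs are invented for illustration:

```csharp
using System.Collections.Generic;

public static class Demo
{
    public static void Main()
    {
        var alice = new User { Id = "u1", Name = "Alice", Friends = new List<User>() };
        var bob = new User { Id = "u2", Name = "Bob", Friends = new List<User>() };

        // Friendship flow: a request from Alice, acceptance by Bob
        alice.SendFriendRequest(bob);
        bob.AcceptFriendRequest(alice);

        // Content flow: Alice posts; a privacy list could scope visibility
        alice.CreatePost("Hello, world!");

        // Cross-cutting services are used independently of the entities
        var search = new SearchService();
        List<User> matches = search.SearchUsers("Bob");

        var suggestions = new ConnectionSuggestionService().GetConnectionSuggestions(alice);
    }
}
```

This keeps entity methods for user-initiated actions while routing search and suggestions through services, matching trade-off 1 above.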
muhammad_salem
1,908,262
How To Automate The Creation Of Users And Groups In Linux Using Bash Script.
INTRODUCTION Imagine you work in very big firm, and your company recruited about 100 new staffs and...
0
2024-07-02T23:00:16
https://dev.to/onyeka_embedded/how-to-automate-the-creation-of-users-and-groups-in-linux-using-bash-script-33am
linux, aws, devops, cloudcomputing
---

**INTRODUCTION**

Imagine you work in a very big firm, your company recruits about 100 new staff members, and you are saddled with the responsibility of creating user accounts for them as well as adding them to different groups in a Linux system. Performing these tasks manually can be very tiring and also error-prone. In this post, I will walk you through automating them using a simple Bash script.

**REQUIREMENTS**

- Linux machine
- Basic knowledge of scripting
- A .txt file that contains the names of the employees (users) and their groups

N.B.: The usernames and groups should be separated by ';', and in a situation where a user belongs to more than one group, the groups should be separated with a comma (','). Check the example below:

_employees.txt_

```
Onyeka;electronics,devOps
Charles;admin
Bukola;marketing
```

**Step 1**

Open your terminal and create a script named create_users.sh; you can use nano or vim:

`nano create_users.sh`

**Step 2**

Let's create directories for storing the generated users and their passwords, as well as the log files. We'll make sure the shebang (#!/bin/bash) is added at the top of the script before anything else.

```bash
#!/bin/bash

#create main directory to save files
mkdir var
cd var #move inside the created dir

#create log folder and user_mgt.log inside the folder
mkdir log && touch log/user_management.log

#create secure folder and user_passwd file inside the folder
mkdir secure && touch secure/user_passwords.txt

#restrict the secure folder to its owner only
chmod 700 secure

# go back to the home dir
cd ..
```

As shown above, the script will create a dir named var; inside the var dir, two more folders are created, named log and secure, with user_management.log and user_passwords.txt inside them respectively. Then we restrict access to the secure folder using chmod.

**Step 3**

Here, we'll create functions for generating a random password, creating a new user, creating a new group, and adding created users to different groups.

```bash
#function to generate password
generate_password() {
    local password=$(openssl rand -base64 12)
    echo "$password"
}

#Create users, groups and generate password
#for them, then assign groups to the created users

#function to create users
createUser(){
    local user="$1"
    if ! id "$user" &>/dev/null; then #check if the user already exists
        sudo useradd -m "$user"
        echo "user $user created"
    else
        echo "$user already created"
    fi
}

#function to create group
createGroup(){
    local group="$1"
    if ! getent group "$group" &>/dev/null; then #check if the group already exists
        sudo groupadd "$group"
        echo "group $group created"
    else
        echo "$group already created"
    fi
}

#function to add users to group
addUser_to_group(){
    local user="$1"
    local group="$2"
    sudo usermod -aG "$group" "$user"
    echo "$user added to group: $group"
}
```

**Step 4**

This is the 'MAIN' entry point of the script. Firstly, we use the code below to check the argument (the .txt file that contains users and their groups) provided for validation purposes, then save the file in a variable (user_file).

```bash
if [[ $# -ne 1 ]]; then
    echo "error: check the file provided"
    exit 1
fi

# user details
user_file="$1"
```

After that, we read the file line by line, validate it, create users, create groups, and generate passwords for the users, as shown in the code snippet below.

```bash
# Check if the file exists
if [[ ! -f "$user_file" ]]; then
    echo "user file not found!"
    exit 1
fi

# Read the file line by line
while IFS=";" read -r user groups; do
    user=$(echo $user | xargs)

    # Check to know if user and group
    # contain strings for validation
    if [[ -z "$user" && -z "$groups" ]]; then
        echo "Empty entry!!"
    else
        #create group and user if they don't exist
        createUser "$user"
        createGroup "$user" #create group with the same name as the user
        sudo usermod -aG "$user" "$user"

        #extract the groups one by one
        IFS=',' read -ra group_array <<< "$groups"
        for group in "${group_array[@]}"; do
            group=$(echo $group | xargs)
            createGroup "$group"
            addUser_to_group "$user" "$group"
        done

        password=$(generate_password)
        echo "$user:$password" | sudo chpasswd
        echo "password assigned to $user"
        echo "$user,$password" >> ./var/secure/user_passwords.txt #PASSWD_PATH
    fi
done < "$user_file"
```

**Complete Code**

```bash
#!/bin/bash

#create main directory to save files
mkdir var
cd var #move inside the created dir

#create log folder and user_mgt.log inside the folder
mkdir log && touch log/user_management.log

#create secure folder and user_passwd file inside the folder
mkdir secure && touch secure/user_passwords.txt

#restrict the secure folder to its owner only
chmod 700 secure

# go back to the home dir
cd ..

#LOG_FILE_PATH=./var/log/user_management.log
#PASSWD_PATH=./var/secure/user_password.txt

#function to generate password
generate_password() {
    local password=$(openssl rand -base64 12)
    echo "$password"
}

#Create users, groups and generate password
#for them, then assign groups to the created users

#function to create users
createUser(){
    local user="$1"
    if ! id "$user" &>/dev/null; then #check if the user already exists
        sudo useradd -m "$user"
        echo "user $user created"
    else
        echo "$user already created"
    fi
}

#function to create group
createGroup(){
    local group="$1"
    if ! getent group "$group" &>/dev/null; then #check if the group already exists
        sudo groupadd "$group"
        echo "group $group created"
    else
        echo "$group already created"
    fi
}

#function to add users to group
addUser_to_group(){
    local user="$1"
    local group="$2"
    sudo usermod -aG "$group" "$user"
    echo "$user added to group: $group"
}

########## MAIN ENTRY POINT OF THE SCRIPT ##############
#Read and validate .txt file containing
#employees' usernames and groups

# Check if the correct number of arguments is provided
(
if [[ $# -ne 1 ]]; then
    echo "error: check the file provided"
    exit 1
fi

# user details
user_file="$1"

# Check if the file exists
if [[ ! -f "$user_file" ]]; then
    echo "user file not found!"
    exit 1
fi

# Read the file line by line
while IFS=";" read -r user groups; do
    user=$(echo $user | xargs)

    # Check to know if user and group
    # contain strings for validation
    if [[ -z "$user" && -z "$groups" ]]; then
        echo "Empty entry!!"
    else
        #create group and user if they don't exist
        createUser "$user"
        createGroup "$user" #create group with the same name as the user
        sudo usermod -aG "$user" "$user"

        #extract the groups one by one
        IFS=',' read -ra group_array <<< "$groups"
        for group in "${group_array[@]}"; do
            group=$(echo $group | xargs)
            createGroup "$group"
            addUser_to_group "$user" "$group"
        done

        password=$(generate_password)
        echo "$user:$password" | sudo chpasswd
        echo "password assigned to $user"
        echo "$user,$password" >> ./var/secure/user_passwords.txt #Log the generated user and password to user_passwords.txt
    fi
done < "$user_file"
) | tee -a ./var/log/user_management.log #Log all actions to user_management.log
```

Finally, make sure the script is executable by running the following command.
`chmod +x create_users.sh`

**How To Use The Script**

```
./create_users.sh employees.txt #where employees.txt contains user;group(s)
```

**This is my HNG Internship task**

HNG Internship is a competitive online bootcamp for coders, designers and other technical talent. It is designed for people who want to rapidly upskill themselves, learn new technologies and build products in a collaborative and fun environment.

https://hng.tech/internship
https://hng.tech/premium
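A quick way to confirm the script worked, assuming the employees.txt example above (note that the script writes its var directory relative to where you run it):

```bash
./create_users.sh employees.txt

# Verify a user and its group memberships
id Onyeka
getent group devOps

# Review generated credentials and the action log
cat ./var/secure/user_passwords.txt
cat ./var/log/user_management.log
```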
onyeka_embedded
1,909,472
BITCOIN RECOVERY EXPERT
OPTIMUM HACKERS RECOVERY IS THE BEST CRYPTO EXPERT TO RECOVER ALL YOUR LOST BTC AND HACKED BANK...
0
2024-07-02T22:59:06
https://dev.to/seren_williams/bitcoin-recovery-expert-4833
bitcoin, recovery, retriever, lostfunds
OPTIMUM HACKERS RECOVERY IS THE BEST CRYPTO EXPERT TO RECOVER ALL YOUR LOST BTC AND HACKED BANK ACCOUNTS. RECOVER YOUR LOST ASSETS WITH OPTIMUM HACKERS RECOVERY Hello everyone, I’d like to use this medium to express my gratitude to OPTIMUM HACKERS RECOVERY for assisting me in recovering my stolen crypto worth $821,550 through their hacking skills. I was skeptical, but it worked, and I got my money back. I’m so glad I discovered them because I thought I’d never get my money back from those fake online investment websites. You can also reach them via. Em:ail support@optimumhackersrecovery.com WHATSAPP: +1 321 313 1201 Website: https://optimumhackersrecovery.com
seren_williams
1,909,471
React Js vs Vue Js: A comparative journey in frontend development
This article compares two popular frontend technologies, Vue.js and ReactJS, highlighting their key...
0
2024-07-02T22:57:02
https://dev.to/vicztech/react-js-vs-vue-js-a-comparative-journey-in-frontend-development-45ei
This article compares two popular frontend technologies, Vue.js and ReactJS, highlighting their key differences, strengths, and weaknesses.

**ReactJS: Solving the "Phantom Problem"**

ReactJS was developed by Facebook to address the "Phantom Problem": UI synchronization issues that arose as Facebook's user base grew. React uses a declarative approach and a virtual DOM to ensure efficient and consistent UI updates.

**Vue.js: The Progressive Framework**

Vue.js, created by Evan You, offers a flexible and approachable alternative to other frameworks. It combines features from Angular and React, emphasizing simplicity and ease of integration, making it suitable for both small and large-scale applications.

**Core Concepts**

- Declarative vs Component-Based:
  - ReactJS: Utilizes a declarative approach with JSX (JavaScript XML) to describe the UI. React efficiently updates the DOM through a virtual DOM.
  - Vue.js: Uses a declarative approach with an intuitive template syntax. Vue's reactivity system tracks changes and updates the DOM efficiently.
- Virtual DOM:
  - ReactJS: The virtual DOM minimizes direct DOM manipulations, calculating the minimal changes required and improving performance.
  - Vue.js: Similar to React, Vue uses a virtual DOM but often outperforms React due to its optimized reactivity system.

**Strengths and Weaknesses**

- ReactJS:
  - Strengths:
    - Ecosystem: Extensive libraries and tools.
    - Community: Large and active with abundant resources.
    - Flexibility: Can be integrated into any part of a project without imposing a strict structure.
  - Weaknesses:
    - Learning Curve: Steep for beginners due to JSX and the extensive ecosystem.
    - State Management: Prop drilling can be cumbersome, requiring additional libraries like Redux or Zustand for complex state management.
- Vue.js:
  - Strengths:
    - Ease of Use: Intuitive and easier to learn for beginners.
    - Integration: Can be incrementally adopted, making it suitable for existing projects.
    - Performance: Efficient reactivity system often outperforms React in many scenarios.
  - Weaknesses:
    - Community Size: Smaller compared to React, leading to fewer third-party libraries and resources.
    - Flexibility: While flexible, it may not offer the same level of freedom as React in very large projects.

**My Journey with ReactJS in the HNG Internship**

At HNG, I look forward to leveraging ReactJS's powerful features to build robust applications. The program's practical, hands-on learning aligns with my goal of mastering React and becoming a proficient frontend developer. Check out the [HNG Internship](https://hng.tech/internship) and learn more about how it can jumpstart your tech career. If you're looking to hire talented developers, visit [HNG Hire](https://hng.tech/hire).

**Conclusion**

ReactJS and Vue.js both offer unique advantages, making them popular choices for frontend development:

- **ReactJS**: Ideal for complex applications due to its extensive ecosystem and declarative approach.
- **Vue.js**: Perfect for rapid development with its simplicity and performance.

Your choice depends on your project's needs and your familiarity with the technologies. For me, ReactJS remains a go-to tool, and I am eager to see my progress during the HNG Internship.

Victor Ndukwe
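To make the declarative contrast above concrete, here is the same tiny counter sketched in each library; the component and file layout are illustrative additions, not part of the original comparison:

```jsx
// React: the UI is a function of state; JSX declares what to render
import { useState } from "react";

export function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Count: {count}</button>;
}
```

```html
<!-- Vue 3 single-file component: template syntax plus a reactive ref -->
<script setup>
import { ref } from "vue";
const count = ref(0);
</script>

<template>
  <button @click="count++">Count: {{ count }}</button>
</template>
```

In both cases you describe the desired UI for a given state and let the library reconcile the DOM, which is the shared "declarative" idea described above.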
vicztech
1,908,935
Technical Article
Introduction SysOps engineers must automate user and group management in today's fast-paced...
0
2024-07-02T12:29:11
https://dev.to/rufus_chibuike_1652bc4801/technical-article-46hb
## Introduction

SysOps engineers must automate user and group management in today's fast-paced technology landscape. This article examines create_users.sh, a comprehensive Bash script that automates user creation, group assignment, and password management on Linux systems.

## Script Overview

The create_users.sh script reads a text file containing usernames and group names, creates users with home directories, assigns groups, generates random passwords, and records all actions. The credentials are securely kept, allowing only the root user to access them.

## Script Details

### Shebang and Root Check

```bash
#!/bin/bash
if [ "$(id -u)" -ne 0 ]; then
  echo "This script must be run as root" >&2
  exit 1
fi
```

The script begins with the shebang line, which specifies the script interpreter. It then checks that it is executed as root to ensure adequate permissions for user and group administration.

### Log and Password File Initialization

```bash
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"
mkdir -p /var/secure
chmod 700 /var/secure
echo "User creation script started at $(date)" > $LOG_FILE
```

The log file and directories are initialized, ensuring that the secure password directory exists and has the required permissions.

### Generating Random Passwords

```bash
generate_password() {
  tr -dc A-Za-z0-9 </dev/urandom | head -c 12
}
```

For secure password generation, a function generates random 12-character passwords with tr and /dev/urandom.

### Reading and Processing the Input File

```bash
while IFS=";" read -r username groups; do
  username=$(echo "$username" | xargs)
  groups=$(echo "$groups" | xargs)
```

The script reads the input file line by line, separating usernames from groups and trimming whitespace.

### User and Group Management

```bash
if id "$username" &>/dev/null; then
  echo "User $username already exists, skipping." >> $LOG_FILE
  continue
fi
groupadd "$username" &>> $LOG_FILE
useradd -m -g "$username" -s /bin/bash "$username" &>> $LOG_FILE
```

Users are created only if they do not already exist. Each user is given a personal group.

### Assigning Additional Groups and Setting Passwords

```bash
IFS=',' read -ra ADDR <<< "$groups"
for group in "${ADDR[@]}"; do
  group=$(echo "$group" | xargs)
  if ! getent group "$group" &>/dev/null; then
    groupadd "$group" &>> $LOG_FILE
  fi
  usermod -aG "$group" "$username" &>> $LOG_FILE
done
password=$(generate_password)
echo "$username:$password" | chpasswd &>> $LOG_FILE
echo "$username,$password" >> $PASSWORD_FILE
```

Users are added to the specified groups, and random passwords are generated and set. Passwords are stored securely.

### Finalizing Permissions

```bash
chmod 600 $PASSWORD_FILE
echo "User creation script completed at $(date)" >> $LOG_FILE
```

The script ensures that the password file is only readable by the root user.

## Conclusion

This script simplifies user management, ensuring that new employees can be onboarded quickly and securely. Proper logging and secure password storage are vital for maintaining system integrity and security. For more information about such scripts and internships, explore the [HNG Internship Program](https://hng.tech/internship).

## Summary

The create_users.sh script automates the process of creating users, assigning groups, and managing passwords in a secure and fast manner. SysOps engineers can maintain strong and scalable systems by understanding the script's individual components.
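As a quick illustration, an input file and invocation for a script like this might look as follows. The file name and entries are hypothetical, and the exact invocation depends on how the script is fed its input.

```bash
# Hypothetical input file: one "username;group1,group2" entry per line
cat > users.txt <<'EOF'
amara;developers,www-data
tunde;developers
EOF

# Run the script as root and inspect the log it writes
sudo bash create_users.sh users.txt
sudo cat /var/log/user_management.log
```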
rufus_chibuike_1652bc4801
1,909,470
HACKED FUNDS FINALLY RECOVERED BY A BTC RECOVERY EXPERT// HIRE OPTIMUM HACKERS RECOVERY TO RECOVER YOUR LOST ASSETS
OPTIMUM HACKERS RECOVERY IS THE BEST CRYPTO EXPERT TO RECOVER ALL YOUR LOST BTC AND HACKED BANK...
0
2024-07-02T22:56:01
https://dev.to/seren_williams/hacked-funds-finally-recovered-by-a-btc-recovery-expert-hire-optimum-hackers-recovery-to-recover-your-lost-assets-39i3
bitcoinrecovery, btcrecovery, recoveryexpert, bitcoinretriever
OPTIMUM HACKERS RECOVERY IS THE BEST CRYPTO EXPERT TO RECOVER ALL YOUR LOST BTC AND HACKED BANK ACCOUNTS. RECOVER YOUR LOST ASSETS WITH OPTIMUM HACKERS RECOVERY Hello everyone, I’d like to use this medium to express my gratitude to OPTIMUM HACKERS RECOVERY for assisting me in recovering my stolen crypto worth $821,550 through their hacking skills. I was skeptical, but it worked, and I got my money back. I’m so glad I discovered them because I thought I’d never get my money back from those fake online investment websites. You can also reach them via. Em:ail support@optimumhackersrecovery.com WHATSAPP: +1 321 313 1201 Website: https://optimumhackersrecovery.com
seren_williams
1,909,469
User and group creation automation in Linux
Hey there! Ever wondered how tech teams smoothly integrate new members into their systems? Scripting...
0
2024-07-02T22:53:48
https://dev.to/celestina/user-and-groups-creation-automation-in-linux-41pb
**Hey there! Ever wondered how tech teams smoothly integrate new members into their systems? Scripting has become the unsung hero! Imagine effortlessly setting up user accounts, creating personalized groups, and ensuring security, all with a few lines of code. In this article, we'll explore how automation through scripting not only simplifies complex tasks but also minimizes errors and maximizes efficiency.**

**In this article, we will be creating a Bash script that helps create users and groups on the fly. This is part of a task assigned during the [HNG Internship](https://hng.tech/internship). The internship also provides a [premium](https://hng.tech/premium) service at a stipend, exposing you to many more opportunities.**

**Anyways, let's get to the party.**

**Tools needed:**

1. Unix (Linux, macOS, WSL)
2. Editor (Vim, Vi, Nano, VSCode). I will be using Vim as the editor of choice; here is a link to learn more about [Vim](https://www.vim.org/).

## Scripting

First, create a file that will contain the script using `touch create_users.sh`. You can also create and open the file simultaneously using Vim.

```bash
touch create_users.sh
vim create_users.sh
```

At the start of the script, we need to ensure that only privileged users with root privileges can execute the script.

```bash
#!/bin/bash

# Check if running as root
if [[ $UID -ne 0 ]]; then
    echo "This script must be run as root"
    exit 1
fi
```

Next, we define the input file argument and the locations of the log and password files, creating them if they do not exist. This is important for error handling and for preventing repetition.

```bash
# Input file with users and their corresponding groups
USER_FILE=$1
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"

# Create the log and password files if they do not exist
mkdir -p /var/secure /var/log
touch $PASSWORD_FILE
touch $LOG_FILE

# Make the password file secure (read and write permissions for the file owner only)
chmod 600 $PASSWORD_FILE
```

Next, we define a function to log activities into the log file.

```bash
# Function to log actions to the log file
log_action() {
    echo "$(date) - $1" >> $LOG_FILE
}
```

We create a function to handle user creation. This function will also manage group assignments and password generation.

```bash
# Function to create a user
create_user() {
    local user=$1    # Username passed as parameter
    local groups=$2  # Groups passed as parameter
    local password   # Variable to store generated password

    # Check if user already exists
    if id "$user" &>/dev/null; then
        log_action "User $user already exists."
        return
    else
        # Create personal group for the user
        groupadd "$user"

        # Create the user with a personal primary group and a home directory
        useradd -m -s /bin/bash -g "$user" "$user"
        if [ $? -eq 0 ]; then
            log_action "User $user created with primary group: $user"
        else
            log_action "Failed to create user $user."
            return
        fi

        # Generate a random password for the user
        password=$(openssl rand -base64 15)

        # Set user's password using chpasswd
        echo "$user:$password" | chpasswd

        # Store the password securely in the password file
        echo "$user:$password" >> $PASSWORD_FILE

        # Set permissions for the user's home directory
        if [ ! -d "/home/$user" ]; then
            mkdir -p "/home/$user"
            chown -R "$user:$user" "/home/$user"
            chmod 700 "/home/$user"
            log_action "Created home directory for $user"
        fi

        log_action "Password for user $user created and stored securely."
    fi

    # Check and create required groups if they don't exist
    IFS=' ' read -ra group_array <<< "$groups"

    # Log the group array
    log_action "User $user will be added to groups: ${group_array[*]}"

    for group in "${group_array[@]}"; do
        group=$(echo "$group" | xargs) # Trim whitespace
        if ! getent group "$group" &>/dev/null; then
            groupadd "$group"
            log_action "Group $group created."
        fi
    done

    # Add the user to additional groups
    for group in "${group_array[@]}"; do
        usermod -aG "$group" "$user"
    done
    log_action "User $user added to groups: ${group_array[*]}"
}
```

Next, we check that a user list file was provided and that it exists.

```bash
# Check if a user list file is provided
if [ $# -ne 1 ]; then
    echo "Usage: $0 <user_list_file>"
    exit 1
fi

filename="$1"
if [ ! -f "$filename" ]; then
    echo "Users list file $filename not found."
    exit 1
fi
```

Next, the script reads the user file and processes each entry to create the users.

```bash
# Read the user list file and create users
while IFS=';' read -r user groups; do
    user=$(echo "$user" | xargs)
    groups=$(echo "$groups" | xargs | tr -d ' ')

    # Replace commas with spaces for the group list format used by create_user
    groups=$(echo "$groups" | tr ',' ' ')

    create_user "$user" "$groups"
done < "$filename"

echo "Done. Check /var/log/user_management.log for details."
```

**Testing**

Make the script executable:

```bash
chmod +x create_users.sh
```

Now, to test the script, create a sample input file:

```bash
vim user_data.csv
```

Add the following content to `user_data.csv`:

```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
emeka; admin,dev
sarah; www-data
john; admin,sudo,dev
```

Run the script, then check the log file to see the output:

```bash
sudo ./create_users.sh user_data.csv
sudo cat /var/log/user_management.log
```

And check the password file to see the generated passwords:

```bash
sudo cat /var/secure/user_passwords.txt
```

## Outro

If you got to this point and your script was able to create the users and groups, then **Congratulations**!

You can check out my GitHub for the [link](https://github.com/Celestina-OG/auto-user-creation.git) to the full script that you can clone and run directly in your terminal.

![cheers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ngi9sm4snoj8nv9mrd5r.jpeg)
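If you want to double-check the results, a few standard commands confirm the users, groups, and home directories from the sample data above (usernames taken from user_data.csv):

```bash
id light                 # shows light's UID and all group memberships
getent group dev         # lists members of the dev group
sudo ls -ld /home/light  # confirms the home directory and its permissions
```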
celestina
1,907,106
Object-Oriented Analysis and Design (OOAD) for the Parking Lot System
Problem definition A parking lot is a designated area for parking vehicles and is a feature found in...
0
2024-07-02T22:52:09
https://dev.to/muhammad_salem/object-oriented-analysis-and-design-ooad-for-the-parking-lot-system-i2o
**Problem definition**

A parking lot is a designated area for parking vehicles and is a feature found in almost all popular venues such as shopping malls, sports stadiums, offices, etc. In a parking lot, there are a fixed number of parking spots available for different types of vehicles. Each of these spots is charged according to the time the vehicle has been parked in the parking lot. The parking time is tracked with a parking ticket issued to the vehicle at the entrance of the parking lot. Once the vehicle is ready to exit, it can either pay at the automated exit panel or to the parking agent at the exit using a card or cash payment method.

**System Requirements**

We will focus on the following set of requirements while designing the parking lot:

- The parking lot should have multiple floors where customers can park their cars.
- The parking lot should have multiple entry and exit points. Customers can collect a parking ticket from the entry points and can pay the parking fee at the exit points on their way out.
- Customers can pay the tickets at the automated exit panel or to the parking attendant.
- Customers can pay via both cash and credit cards.
- Customers should also be able to pay the parking fee at the customer's info portal on each floor. If the customer has paid at the info portal, they don't have to pay at the exit.
- The system should not allow more vehicles than the maximum capacity of the parking lot. If the parking is full, the system should be able to show a message at the entrance panel and on the parking display board on the ground floor.
- Each parking floor will have many parking spots. The system should support multiple types of parking spots such as Compact, Large, Handicapped, Motorcycle, etc.
- The parking lot should have some parking spots specified for electric cars. These spots should have an electric panel through which customers can pay and charge their vehicles.
- The system should support parking for different types of vehicles like car, truck, van, motorcycle, etc.
- Each parking floor should have a display board showing any free parking spot for each spot type.
- The system should support a per-hour parking fee model. For example, customers have to pay $4 for the first hour, $3.5 for the second and third hours, and $2.5 for all the remaining hours.

**Use case diagram**

Here are the main actors in our system:

- **Admin**: Mainly responsible for adding and modifying parking floors, parking spots, entrance and exit panels, adding/removing parking attendants, etc.
- **Customer**: All customers can get a parking ticket and pay for it.
- **Parking attendant**: Parking attendants can do all the activities on the customer's behalf, and can take cash for ticket payment.
- **System**: Displays messages on different info panels, and assigns and removes vehicles from parking spots.

Here are the top use cases for the parking lot:

- **Add/Remove/Edit parking floor**: To add, remove or modify a parking floor from the system. Each floor can have its own display board to show free parking spots.
- **Add/Remove/Edit parking spot**: To add, remove or modify a parking spot on a parking floor.
- **Add/Remove a parking attendant**: To add or remove a parking attendant from the system.
- **Take ticket**: To provide customers with a new parking ticket when entering the parking lot.
- **Scan ticket**: To scan a ticket to find out the total charge.
- **Credit card payment**: To pay the ticket fee with a credit card.
- **Cash payment**: To pay the parking ticket through cash.
- **Add/Modify parking rate**: To allow the admin to add or modify the hourly parking rate.

**Step 1: Identify the Core Objects (Classes)**

"First, I'll carefully read through the requirements and identify the nouns. These often translate to our core objects or classes."

Core objects identified:

1. ParkingLot
2. Floor
3. ParkingSpot
4. Vehicle
5. Ticket
6. EntrancePanel
7. ExitPanel
8. CustomerInfoPortal
9. ParkingAttendant
10. Payment
11. ElectricPanel
12. DisplayBoard

**Step 2: Analyze Relationships Between Objects**

"Now, I'll think about how these objects relate to each other."

- ParkingLot has multiple Floors
- Floor has multiple ParkingSpots
- Floor has a DisplayBoard
- ParkingLot has EntrancePanels and ExitPanels
- Floor has CustomerInfoPortals
- Ticket is associated with a Vehicle and a ParkingSpot
- Payment is associated with a Ticket
- ElectricPanel is associated with certain ParkingSpots

**Step 3: Identify Attributes for Each Class**

"Let's think about the characteristics each class should have."

1. **ParkingLot**: name, address, floors (List<Floor>), entrancePanels (List<EntrancePanel>), exitPanels (List<ExitPanel>), maxCapacity
2. **Floor**: floorNumber, parkingSpots (List<ParkingSpot>), displayBoard, customerInfoPortals (List<CustomerInfoPortal>)
3. **ParkingSpot**: spotNumber, type (enum: Compact, Large, Handicapped, Motorcycle, Electric), isOccupied, vehicle (nullable)
4. **Vehicle**: licenseNumber, type (enum: Car, Truck, Van, Motorcycle)
5. **Ticket**: ticketNumber, issueTime, paymentTime (nullable), vehicle, parkingSpot, paymentStatus (enum: Unpaid, Paid)
6. **EntrancePanel**: id
7. **ExitPanel**: id
8. **CustomerInfoPortal**: id
9. **ParkingAttendant**: id, name
10. **Payment**: amount, paymentTime, paymentMethod (enum: Cash, CreditCard)
11. **ElectricPanel**: id, associatedParkingSpot
12. **DisplayBoard**: id, freeSpotsCounts (Dictionary<SpotType, int>)

**Step 4: Identify Methods (Behaviors) for Each Class**

"Now, let's think about what actions each object can perform or what can be done to it."

1. **ParkingLot**: isFull(), addFloor(Floor), getAvailableSpot(VehicleType)
2. **Floor**: addParkingSpot(ParkingSpot), updateDisplayBoard(), getAvailableSpot(VehicleType)
3. **ParkingSpot**: occupy(Vehicle), vacate()
4. **Vehicle**: (no specific methods, mainly used for identification)
5. **Ticket**: calculateFee(), markAsPaid()
6. **EntrancePanel**: printTicket(Vehicle)
7. **ExitPanel**: processPayment(Ticket, PaymentMethod), validateTicket(Ticket)
8. **CustomerInfoPortal**: processPayment(Ticket, PaymentMethod), getFloorSummary()
9. **ParkingAttendant**: processPayment(Ticket, PaymentMethod)
10. **Payment**: processPayment()
11. **ElectricPanel**: startCharging(), stopCharging(), calculateChargingFee()
12. **DisplayBoard**: updateFreeSpotsCounts(Dictionary<SpotType, int>)

**Step 5: Identify Abstractions and Inheritance**

"Are there any commonalities that we can abstract? Can we use inheritance to simplify our design?"

- We can create an abstract class 'ParkingSpot' with subclasses for each type (Compact, Large, Handicapped, Motorcycle, ElectricSpot)
- We can create an interface 'PaymentProcessor' that EntrancePanel, ExitPanel, and CustomerInfoPortal can implement
- Vehicle can be an abstract class with subclasses Car, Truck, Van, Motorcycle

**Step 6: Consider Design Patterns**

"Are there any design patterns that could improve our system?"
- Singleton pattern for ParkingLot (assuming one parking lot in the system)
- Factory pattern for creating different types of ParkingSpots and Vehicles
- Strategy pattern for different payment methods
- Observer pattern for updating DisplayBoard when ParkingSpots change state

**Step 7: Refine the Design**

"Let's refine our design based on these considerations."

```java
public abstract class ParkingSpot {
    private String spotNumber;
    private boolean isOccupied;
    private Vehicle vehicle;

    public abstract boolean canFitVehicle(Vehicle vehicle);

    public void occupy(Vehicle vehicle) {
        this.vehicle = vehicle;
        this.isOccupied = true;
    }

    public void vacate() {
        this.vehicle = null;
        this.isOccupied = false;
    }
}

public class CompactSpot extends ParkingSpot {
    public boolean canFitVehicle(Vehicle vehicle) {
        return vehicle.getType() == VehicleType.CAR;
    }
}

public class LargeSpot extends ParkingSpot {
    public boolean canFitVehicle(Vehicle vehicle) {
        return vehicle.getType() == VehicleType.CAR || vehicle.getType() == VehicleType.TRUCK;
    }
}

public abstract class Vehicle {
    private String licenseNumber;
    private VehicleType type;

    public abstract VehicleType getType();
}

public class Car extends Vehicle {
    public VehicleType getType() {
        return VehicleType.CAR;
    }
}

public interface PaymentProcessor {
    boolean processPayment(Ticket ticket, PaymentMethod method);
}

public class ExitPanel implements PaymentProcessor {
    public boolean processPayment(Ticket ticket, PaymentMethod method) {
        // Implementation
        return true;
    }
}

public class ParkingLot {
    private static ParkingLot instance = null;
    private List<Floor> floors;
    // Other attributes

    private ParkingLot() {
        // Private constructor
    }

    public static ParkingLot getInstance() {
        if (instance == null) {
            instance = new ParkingLot();
        }
        return instance;
    }

    public ParkingSpot getAvailableSpot(Vehicle vehicle) {
        // Implementation
        return null;
    }
}

public class ParkingSpotFactory {
    public static ParkingSpot createParkingSpot(ParkingSpotType type) {
        switch (type) {
            case COMPACT:
                return new CompactSpot();
            case LARGE:
                return new LargeSpot();
            // Other cases
            default:
                throw new IllegalArgumentException("Unknown spot type: " + type);
        }
    }
}
```

**Step 8: Consider Edge Cases and Error Handling**

"What could go wrong? How should we handle errors?"

- What if a vehicle tries to exit without paying?
- What if the parking lot is full when a vehicle tries to enter?
- What if an electric vehicle is parked in a non-electric spot?

We should add appropriate exception handling and error messages for these scenarios.

**Step 9: Think About Scalability and Performance**

"How will this system scale? Are there any potential performance bottlenecks?"

- The getAvailableSpot method could become slow as the parking lot grows. We might need to implement a more efficient data structure to track available spots.
- We should consider caching frequently accessed data, like the number of available spots per floor.

**Step 10: Consider Future Extensions**

"How can we make this system easy to extend in the future?"

- Use dependency injection to make it easy to swap out components (like payment processors)
- Design the API in a way that makes it easy to add new features (like valet parking or reserved spots)

This thought process demonstrates how to approach the object-oriented analysis and design of a system like a parking lot. It involves iterative thinking, constantly refining the design as new considerations come to light. The end result is a flexible, extensible system that meets the current requirements while being prepared for future changes.
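To make the Observer suggestion from Step 6 concrete, here is a minimal Java sketch of spots notifying a display board when their state changes. The listener interface and class names are illustrative, not part of the design above.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative listener: a board subscribes to spot state changes.
interface SpotListener {
    void onSpotChanged(boolean nowOccupied);
}

class ObservableSpot {
    private final List<SpotListener> listeners = new ArrayList<>();
    private boolean occupied;

    void addListener(SpotListener listener) {
        listeners.add(listener);
    }

    void setOccupied(boolean occupied) {
        this.occupied = occupied;
        // Push the new state to every subscriber
        for (SpotListener listener : listeners) {
            listener.onSpotChanged(occupied);
        }
    }
}

class SimpleDisplayBoard implements SpotListener {
    private int freeSpots;

    SimpleDisplayBoard(int initialFreeSpots) {
        this.freeSpots = initialFreeSpots;
    }

    @Override
    public void onSpotChanged(boolean nowOccupied) {
        freeSpots += nowOccupied ? -1 : 1;
        System.out.println("Free spots: " + freeSpots);
    }
}
```

With this wiring, the display board no longer needs to poll: each occupy or vacate call pushes an update to every subscribed board.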
What I provided is indeed part of the Domain Modeling process, which is a crucial step in Object-Oriented Analysis and Design (OOAD). This process helps us understand the problem domain and create a conceptual model before we start coding. Now, let's dive into how this translates to building a system using ASP.NET Core Web API and a layered architecture.

Thought process of a professional software engineer:

**1. Understanding the Relationship between Domain Modeling and Implementation**

"The domain model we created serves as a blueprint for our actual implementation. It helps us understand the core concepts and their relationships. Now, we need to translate this conceptual model into a concrete implementation using ASP.NET Core Web API and a layered architecture."

**2. Choosing a Layered Architecture**

"For our parking lot system, we'll use a common layered architecture:

1. Presentation Layer (API Controllers)
2. Application Layer (Services)
3. Domain Layer (Entities and Interfaces)
4. Infrastructure Layer (Data Access, External Services)

This separation of concerns will make our application more maintainable and testable."

**3. Setting Up the Project Structure**

"Let's create a solution with multiple projects:

- ParkingLot.API (ASP.NET Core Web API project)
- ParkingLot.Application (Class Library)
- ParkingLot.Domain (Class Library)
- ParkingLot.Infrastructure (Class Library)"

**4. Implementing the Domain Layer**

"We'll start by implementing our domain entities in the ParkingLot.Domain project. These will closely resemble the classes we identified in our domain model."

```csharp
// ParkingLot.Domain/Entities/ParkingLot.cs
public class ParkingLot
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Capacity { get; set; }
    public List<Floor> Floors { get; set; }
    // Other properties and methods
}

// ParkingLot.Domain/Entities/Floor.cs
public class Floor
{
    public int Id { get; set; }
    public int FloorNumber { get; set; }
    public List<ParkingSpot> ParkingSpots { get; set; }
    // Other properties and methods
}

// ParkingLot.Domain/Entities/ParkingSpot.cs
public abstract class ParkingSpot
{
    public int Id { get; set; }
    public string SpotNumber { get; set; }
    public bool IsOccupied { get; set; }
    public abstract bool CanFitVehicle(Vehicle vehicle);
    // Other properties and methods
}

// Additional entity classes...
```

**5. Implementing the Application Layer**

"In the ParkingLot.Application project, we'll define interfaces for our services and implement them. These services will contain our business logic."

```csharp
// ParkingLot.Application/Interfaces/IParkingService.cs
public interface IParkingService
{
    Task<Ticket> ParkVehicle(Vehicle vehicle);
    Task<Payment> ExitParking(Ticket ticket);
    // Other method signatures
}

// ParkingLot.Application/Services/ParkingService.cs
public class ParkingService : IParkingService
{
    private readonly IParkingLotRepository _parkingLotRepository;

    public ParkingService(IParkingLotRepository parkingLotRepository)
    {
        _parkingLotRepository = parkingLotRepository;
    }

    public async Task<Ticket> ParkVehicle(Vehicle vehicle)
    {
        var parkingLot = await _parkingLotRepository.GetParkingLotAsync();
        var availableSpot = parkingLot.FindAvailableSpot(vehicle);

        if (availableSpot == null)
            throw new NoAvailableSpotException();

        availableSpot.OccupySpot(vehicle);
        var ticket = new Ticket(vehicle, availableSpot);
        await _parkingLotRepository.SaveChangesAsync();

        return ticket;
    }

    // Implement other methods...
}
```
**6. Implementing the Infrastructure Layer**

"In the ParkingLot.Infrastructure project, we'll implement our data access logic and any external service integrations."

```csharp
// ParkingLot.Infrastructure/Data/ParkingLotDbContext.cs
public class ParkingLotDbContext : DbContext
{
    public DbSet<ParkingLot> ParkingLots { get; set; }
    public DbSet<Floor> Floors { get; set; }
    public DbSet<ParkingSpot> ParkingSpots { get; set; }
    // Other DbSets...

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Configure entity relationships and constraints
    }
}

// ParkingLot.Infrastructure/Repositories/ParkingLotRepository.cs
public class ParkingLotRepository : IParkingLotRepository
{
    private readonly ParkingLotDbContext _context;

    public ParkingLotRepository(ParkingLotDbContext context)
    {
        _context = context;
    }

    public async Task<ParkingLot> GetParkingLotAsync()
    {
        return await _context.ParkingLots
            .Include(pl => pl.Floors)
            .ThenInclude(f => f.ParkingSpots)
            .FirstOrDefaultAsync();
    }

    // Implement other methods...
}
```

**7. Implementing the API Layer**

"Finally, in our ParkingLot.API project, we'll create controllers that use our application services to handle HTTP requests."

```csharp
// ParkingLot.API/Controllers/ParkingController.cs
[ApiController]
[Route("api/[controller]")]
public class ParkingController : ControllerBase
{
    private readonly IParkingService _parkingService;

    public ParkingController(IParkingService parkingService)
    {
        _parkingService = parkingService;
    }

    [HttpPost("park")]
    public async Task<ActionResult<TicketDto>> ParkVehicle(VehicleDto vehicleDto)
    {
        var vehicle = new Vehicle(vehicleDto.LicenseNumber, vehicleDto.VehicleType);
        var ticket = await _parkingService.ParkVehicle(vehicle);
        return Ok(new TicketDto(ticket));
    }

    [HttpPost("exit")]
    public async Task<ActionResult<PaymentDto>> ExitParking(TicketDto ticketDto)
    {
        var ticket = new Ticket(ticketDto.TicketNumber);
        var payment = await _parkingService.ExitParking(ticket);
        return Ok(new PaymentDto(payment));
    }

    // Other action methods...
}
```

**8. Configuring Dependency Injection**

"In our Startup.cs file, we'll configure our dependency injection to wire up our services and repositories."

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<ParkingLotDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

    services.AddScoped<IParkingLotRepository, ParkingLotRepository>();
    services.AddScoped<IParkingService, ParkingService>();

    services.AddControllers();
    // Other service configurations...
}
```

**9. Implementing Cross-Cutting Concerns**

"We should also consider implementing cross-cutting concerns like logging, error handling, and validation."
```csharp
// Example of a global exception handler middleware
public class ExceptionMiddleware
{
    private readonly RequestDelegate _next;
    private readonly ILogger<ExceptionMiddleware> _logger;

    public ExceptionMiddleware(RequestDelegate next, ILogger<ExceptionMiddleware> logger)
    {
        _next = next;
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext httpContext)
    {
        try
        {
            await _next(httpContext);
        }
        catch (Exception ex)
        {
            _logger.LogError($"Something went wrong: {ex}");
            await HandleExceptionAsync(httpContext, ex);
        }
    }

    private async Task HandleExceptionAsync(HttpContext context, Exception exception)
    {
        context.Response.ContentType = "application/json";
        context.Response.StatusCode = (int)HttpStatusCode.InternalServerError;

        await context.Response.WriteAsync(new ErrorDetails()
        {
            StatusCode = context.Response.StatusCode,
            Message = "Internal Server Error."
        }.ToString());
    }
}

// In Startup.cs
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseMiddleware<ExceptionMiddleware>();
    // Other middleware configurations...
}
```

**10. Testing**

"Throughout this process, we should be writing unit tests for our domain logic, integration tests for our data access, and API tests for our endpoints."

This approach shows how we translate our domain model into a fully-fledged ASP.NET Core Web API application using a layered architecture. The domain model guides our implementation, ensuring that our code accurately represents the problem domain. The layered architecture helps us separate concerns, making the application more maintainable and testable.

Remember, this is an iterative process. As you implement and test, you might discover new insights that cause you to revisit and refine your domain model. This cycle of modeling, implementing, and refining is a key part of professional software development.
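To ground the testing step, here is a hedged xUnit-style sketch for the exception path of ParkingService. It assumes the Moq library; the exact entity constructors are not shown above, so treat the arrangement (an empty ParkingLot with no free spots, the Vehicle constructor shape) as illustrative.

```csharp
using System.Threading.Tasks;
using Moq;
using Xunit;

public class ParkingServiceTests
{
    [Fact]
    public async Task ParkVehicle_Throws_WhenNoSpotIsAvailable()
    {
        // Arrange: the mocked repository returns a lot assumed to have no free
        // spots, so FindAvailableSpot is expected to return null.
        var repository = new Mock<IParkingLotRepository>();
        repository.Setup(r => r.GetParkingLotAsync())
                  .ReturnsAsync(new ParkingLot()); // assumed to contain no spots

        var service = new ParkingService(repository.Object);
        var vehicle = new Vehicle("ABC-123", VehicleType.Car); // ctor shape assumed

        // Act + Assert: the service should surface the domain exception
        await Assert.ThrowsAsync<NoAvailableSpotException>(
            () => service.ParkVehicle(vehicle));
    }
}
```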
muhammad_salem
1,909,039
ClippyAI - Developing a Local AI Agent
Introduction As a developer, I’ve always been passionate about creating tools that solve...
0
2024-07-02T22:50:48
https://dev.to/mrdoe/clippyai-59h7
ai, dotnet, avalonia
## Introduction

As a developer, I've always been passionate about creating tools that solve real-world problems. But there was one issue that consistently irked me: the never-ending stream of repetitive emails. Whether it was customer inquiries, tech support requests, or project updates, my inbox overflowed with similar questions day in and day out. It was like Groundhog Day, but with email threads.

## The Annoyance Factor

Picture this: You're sipping your morning coffee, already diving into some exciting coding challenges, and suddenly, ping! Another email lands in your inbox. It's the same query you've answered a hundred times before. You sigh, type in your well-crafted response, and hit send. Rinse and repeat. It's not just time-consuming; it's soul-draining, because you completely lose focus on your coding work.

## The Eureka Moment

One fateful afternoon, as I stared at my screen, contemplating the meaning of life (and another email), it hit me: Why not build an AI agent that assists you with these repetitive tasks? Of course, I could just use ChatGPT, but that's not allowed at my company for data privacy reasons. It's also very distracting and time-consuming to navigate to the website, copy and paste the source email, ask ChatGPT to write an answer, and copy and paste the reply back into your email application.

The data protection part is easily solvable: with today's modern CPUs and GPUs it is possible to use Ollama and host the inference of an AI model of your choice locally at reasonable speed. But how do you integrate an agent for Ollama into your OS? The DeepL Windows desktop app came to mind, where you just hit Ctrl+C twice and the app instantly translates the text you selected. So my idea was to create a daemon that watches the clipboard for changes and then sends the content along with a task description to Ollama.

ClippyAI wouldn't just suggest; I wanted it to take action. When I hit reply, it would automatically type out the response. Imagine the joy of watching ClippyAI do the grunt work while I sipped my coffee.

I chose the name ClippyAI as a mixture of "Clipboard" and "AI", and it was also inspired by the nostalgic Microsoft Office paperclip, which everyone hated back in the day.

Because I'm mostly a .NET developer, I used .NET 8 as the foundation. I wanted to create a multi-platform application, because I'm using Windows at work and Linux at home, so I chose the Avalonia framework for this project.

After I realized that the main idea was working well, I extended ClippyAI's tasks beyond answering emails: it can also explain or translate the copied text, or even perform custom user-defined tasks with it.

![Screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/guxbutwq4vqkj0ebefrl.png)

**Key Features**

- Clipboard Integration: ClippyAI monitors your clipboard activity in real-time. Whenever you copy text, URLs, or other content, it automatically sends it to the Ollama AI model for analysis.
- Context-Aware Responses: Modern AI models such as Llama3 or Gemma2 are able to consider the context of your task, if you give them enough input. Whether you're drafting an email, writing code, or composing a document, ClippyAI can provide relevant and accurate responses.
- Workflow Enhancement: By automating repetitive typing tasks, ClippyAI frees up your time and mental energy. Say goodbye to monotonous copy-paste routines!

## Getting Started

- Install Ollama from https://ollama.com.
- Run `ollama pull gemma2` on the command line to install the gemma2 AI model on your PC.
- Clone the ClippyAI repository from [https://github.com/MrDoe/ClippyAI](https://github.com/MrDoe/ClippyAI) and build and run it in Visual Studio or VS Code. If you are using VS Code, you need to build it twice, because the necessary resource files are generated during the first build.
- Check the configuration in App.config. The standard values should work fine for most installations.

## Early Development Phase

While ClippyAI shows some promise, it's essential to note that it's still in its early development phase. As with any cutting-edge technology, there are risks involved. Here's what you should be aware of:

- Use it at your own risk: ClippyAI is experimental. It may occasionally produce unexpected results or errors. Always double-check the generated content before finalizing it.
- Document safety: ClippyAI may unintentionally delete or overwrite your existing documents if you are using the keyboard output and Auto-Mode. So be careful where you place your cursor!
- Known issues: German umlauts and other special characters are currently not typed in keyboard mode under Linux/X11.
- Developers wanted: ClippyAI is an open-source project that is open for contributions from developers like you. If you want to join the development, clone the repo and submit a pull request!

## Conclusion

ClippyAI is still a work in progress. It won't win any Turing Awards yet, but it's a little side project I want to extend further. So, the next time you receive a prompt reply from me, know that ClippyAI is doing its thing. And if it ever goes rogue, blame the coffee.

Disclaimer: ClippyAI may occasionally channel its inner HAL 9000. Use at your own risk.
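For readers curious about the core loop, here is a heavily simplified C# sketch of the idea described above; this is not ClippyAI's actual code. The clipboard call is a placeholder (a real app would use Avalonia's clipboard API), while the HTTP request shape follows Ollama's documented /api/generate endpoint.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

class ClipboardWatcherSketch
{
    private static readonly HttpClient Http = new();
    private static string _lastSeen = "";

    static async Task Main()
    {
        while (true)
        {
            // Placeholder: replace with a real clipboard read (e.g., Avalonia's clipboard API).
            string text = await GetClipboardTextAsync();

            if (!string.IsNullOrWhiteSpace(text) && text != _lastSeen)
            {
                _lastSeen = text;

                // Ask the locally hosted model for a reply; stream=false returns one JSON object.
                var response = await Http.PostAsJsonAsync("http://localhost:11434/api/generate", new
                {
                    model = "gemma2",
                    prompt = "Write a polite reply to this email:\n" + text,
                    stream = false
                });

                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }

            await Task.Delay(1000); // poll roughly once per second
        }
    }

    private static Task<string> GetClipboardTextAsync() => Task.FromResult(""); // stub
}
```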
mrdoe
1,909,468
Bootstrap - Grid System
Grid system. In Bootstrap, the grid system is useful for creating...
0
2024-07-02T22:50:04
https://dev.to/fernandomoyano/bootstrap-sistema-de-cuadricula-31cl
# Grid System

---

In Bootstrap, the grid system is useful for quickly creating web page layouts that respond to different devices according to screen size. The Bootstrap grid system uses a series of containers, rows, and columns to define the layout and align content appropriately for each device. In the grid system, you can add up to 12 columns and as many rows as you want, and the columns rearrange themselves automatically according to the device's screen size. Using the Bootstrap grid system, we can create a responsive web page layout by defining the 12 columns individually or by grouping columns to create wider ones.

---

![Grid system](https://www.tutlane.com/images/bootstrap/bootstrap_grid_system_sample_diagram.PNG)

---

# Bootstrap Grid Classes

---

Bootstrap includes five predefined grid classes to scale content according to the device or viewport size:

.col-*
.col-sm-*
.col-md-*
.col-lg-*
.col-xl-*

| Class | Device type | Width |
|-----------|--------------|---------------|
| .col-* | Extra small | <576px |
| .col-sm-* | Small | ≥576px |
| .col-md-* | Medium | ≥768px |
| .col-lg-* | Large | ≥992px |
| .col-xl-* | Extra large | ≥1200px |

---

To create responsive web page layouts using the Bootstrap grid system, we need to use rows and columns inside a container or container-fluid element, as shown below.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col-*-*"></div>
    <div class="col-*-*"></div>
    <div class="col-*-*"></div>
  </div>
</div>
```

---

- If you look at the code above, we created a grid layout by specifying rows and columns inside .container. Here, the first star (*) indicates the breakpoint and must be sm, md, lg, or xl. The second star (*) indicates a number, which must be from 1 to 12 within each row.
- Whenever we create a grid layout, we need to place the content inside columns (.col or .col-*-*), those columns must be children of rows (.row), and those rows must be placed inside a container (.container or .container-fluid).
- In the Bootstrap grid system, we can add up to 12 columns in a row. If we add more than 12 columns in a row, the extra columns are placed on a new line.

---

# Bootstrap Grid Layout with Three Responsive Columns

---

Below is an example of creating a responsive three-column layout for devices whose screen width is greater than 576px in Bootstrap.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col-sm-4">col-sm-4</div>
    <div class="col-sm-4">col-sm-4</div>
    <div class="col-sm-4">col-sm-4</div>
  </div>
  <div class="row">
    <div class="col-sm-2">col-sm-2</div>
    <div class="col-sm-4">col-sm-4</div>
    <div class="col-sm-6">col-sm-6</div>
  </div>
  <div class="row">
    <div class="col-sm-2">col-sm-2</div>
    <div class="col-sm-5">col-sm-5</div>
    <div class="col-sm-5">col-sm-5</div>
  </div>
</div>
```

---

The Bootstrap example above creates a three-column layout for devices whose screen width is greater than 576px.
The three defined columns automatically stack on top of one another on mobile devices or screens narrower than 576px. When we run the Bootstrap example above, we get the result shown below.

![bootstrap-grid](https://www.tutlane.com/images/bootstrap/bootstrap_three_column_layout_example_result.PNG)

---

# Bootstrap Grid Layout with Auto Columns

---

In Bootstrap, we can create equal-width (auto) columns for all devices, whether extra small, small, medium, large, or extra large, by specifying only the .col class without any column number. The Bootstrap grid system automatically adjusts columns defined with the .col class based on the device's screen width. The following example shows how to create equal-width columns to support all devices in Bootstrap.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col">column 1</div>
    <div class="col">column 2</div>
    <div class="col">column 3</div>
  </div>
  <div class="row">
    <div class="col">column 1</div>
    <div class="col">column 2</div>
  </div>
  <div class="row">
    <div class="col">column 1</div>
  </div>
</div>
```

---

If you look at the example above, we created columns using only the .col class, so the Bootstrap grid system sizes the columns evenly based on the device's screen size. When we run the Bootstrap example above, we get the result shown below.

---

![bootstrap-grid-3](https://www.tutlane.com/images/bootstrap/bootstrap_auto_columns_layout_with_custom_width_example_result.PNG)

---

# Bootstrap Grid Layout with Variable-Width Columns

---

In Bootstrap, by using col-{breakpoint}-auto classes, we can create variable-width columns that resize according to the width of their content. Below is an example of creating variable-width columns that size themselves to their content in Bootstrap.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col-sm-5">col 1</div>
    <div class="col-md-auto">Variable width column content</div>
    <div class="col-sm-3">col 2</div>
  </div>
  <div class="row">
    <div class="col">col 1</div>
    <div class="col">col 2</div>
    <div class="col-md-auto">Variable width column content</div>
  </div>
</div>
```

---

If you look at the example above, we created variable-width columns by specifying the col-md-auto class, and those columns size themselves automatically based on the width of their content. When we run the Bootstrap example above, it returns the result shown below.

![](https://www.tutlane.com/images/bootstrap/bootstrap_grid_variable_width_columns_example_result.PNG)

If you look at the result above, the gray-colored part is empty because the column widths adjusted to the content and the defined sizes.

---

# Bootstrap Grid Layout with Mixed Column Sizes

---

We can create a grid layout targeted at particular devices, i.e., extra small, small, large, or extra large. In Bootstrap, we can create more flexible layouts by combining different column sizes to change the orientation of the columns based on the device size.
For example, small screens may need a single column in each row, while larger screens can hold more than one column per row, so you can set a smaller column size and still fit multiple columns in each row. To see what I mean, let's look at an example. Below is an example of creating a Bootstrap grid layout that mixes different column sizes for different device sizes.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col-lg-4 col-md-4 col-sm-6 col-xs-12">Content area 1</div>
    <div class="col-lg-4 col-md-4 col-sm-6 col-xs-12">Content area 2</div>
    <div class="col-lg-4 col-md-4 col-sm-6 col-xs-12">Content area 3</div>
  </div>
</div>
```

---

The example above shows one row with 3 columns on large and medium screens. On small screens, it shows a row with 2 columns and another row with one more column taking half the screen. On extra small screens, such as smartphones, we see 3 rows, each with one full-width column. When we run the Bootstrap example above, we get the result shown below.

![](https://www.tutlane.com/images/bootstrap/bootstrap_grid_mixed_columns_width_example_result.PNG)

---

This is how we can create more flexible grid layouts in Bootstrap by combining multiple column sizes to change the orientation of the columns based on the device size.

---

# Offsetting Columns in the Bootstrap Grid Layout

---

In the Bootstrap grid layout, we can move columns to the right by using .offset-* grid classes. To offset a column, add an offset-* class alongside its existing column class. For example, if your column class is .col-md-4, you can offset the column with the class .offset-md-4. The number after the offset indicates how many columns are added as a margin. Below is an example of offsetting grid columns using .offset-* classes in Bootstrap.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col-md-4">.col-md-4</div>
    <div class="col-md-4 offset-md-4">.col-md-4 .offset-md-4</div>
  </div>
  <div class="row">
    <div class="col-md-3 offset-md-3">.col-md-3 .offset-md-3</div>
    <div class="col-md-3 offset-md-3">.col-md-3 .offset-md-3</div>
  </div>
  <div class="row">
    <div class="col-md-6 offset-md-3">.col-md-6 .offset-md-3</div>
  </div>
</div>
```

---

If you look at the example above, we increase the left margin of columns by 4 and 3 columns using the offset-md-4 and offset-md-3 classes. When we run the Bootstrap example above, we get the result shown below.

![](https://www.tutlane.com/images/bootstrap/bootstrap_grid_offsetting_layout_example_result.PNG)

---

# Reordering Columns in the Bootstrap Grid Layout

---

In the Bootstrap grid system, we can reorder the appearance of grid columns by using .order-* classes, without changing their actual order in the grid layout. In the grid layout, the order of the columns varies according to the defined order numbers.
For example, grid columns with a lower order (e.g., .col, .order-1) or without order classes appear first, and columns with higher order numbers appear after columns with lower order numbers. The grid system supports column orders 1 through 12 across all five grid tiers. Below is an example of changing the appearance of grid columns by specifying the order number in Bootstrap.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col order-10">First</div>
    <div class="col order-4">Second</div>
    <div class="col">Third</div>
  </div>
</div>
```

---

If you look at the example above, we change the order of the columns by specifying the order numbers (order-10, order-4), and for the third column we do not specify any order class. When we run the Bootstrap example above, we get the result shown below.

![](https://www.tutlane.com/images/bootstrap/bootstrap_grid_column_reordering_example_result.PNG)

If you look at the result above, the appearance of the grid columns changed according to the order numbers. The third column appeared first because we did not specify any order class for it, so it is treated as the lowest.

We can also change the order of grid layout columns using the .order-first and .order-last classes, as shown below.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col order-last">First</div>
    <div class="col order-first">Second</div>
    <div class="col">Third</div>
  </div>
</div>
```

---

If you look at the example above, we change the appearance of the columns using the .order-first and .order-last classes. When we run the Bootstrap example above, we get the result shown below.

![](https://www.tutlane.com/images/bootstrap/bootstrap_grid_change_columns_order_example_result.PNG)

If you look at the result above, the second column appeared first because it was assigned the .order-first class, and the first column moved to the end because of the .order-last class.

---

# Nesting Columns in the Bootstrap Grid

---

Nesting columns means adding columns inside another column. You can insert up to 12 columns inside another column without necessarily using all 12. To do this, add a new row (.row) and a set of .col-md-* columns inside an existing .col-md-* column. Below is an example of nesting grid columns inside another column in the Bootstrap grid layout.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col-sm-9">
      First column
      <div class="row">
        <div class="col-xs-8 col-sm-6">Column 1 inside the first column</div>
        <div class="col-xs-4 col-sm-6">Column 2 inside the first column</div>
      </div>
    </div>
    <div class="col">Second column</div>
  </div>
</div>
```

---

If you look at the example above, we created nested grid columns by adding a new row (.row) and a set of columns inside an existing .col-sm-9 column. When we run the Bootstrap example above, we get the result shown below.
![](https://www.tutlane.com/images/bootstrap/bootstrap_grid_system_nest_columns_example_result.PNG)

This is how we can create nested columns in the Bootstrap grid layout according to our requirements.

---

# Bootstrap Grid Column Wrapping

---

Bootstrap is designed for 12 columns per row. If you use more than 12 columns in a row, the extra columns wrap onto a new line as one unit. Below is an example of a Bootstrap grid layout with more than 12 columns per row to demonstrate column wrapping.

---

```HTML
<div class="container">
  <div class="row">
    <div class="col-sm-9">First column</div>
    <div class="col-sm-4">Second column</div>
    <div class="col-sm-8">Third column</div>
  </div>
</div>
```

---

If you look at the example above, the first column is 9 columns wide and the second is 4 (9 + 4 = 13), so the columns cannot fit in one row and the extra columns wrap onto a new line. When we run the example above, we get the result shown below.

![](https://www.tutlane.com/images/bootstrap/bootstrap_grid_column_wrapping_example_result.PNG)

This is how column wrapping occurs in the Bootstrap grid system when more than 12 columns are used in a row.
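Putting several of these ideas together, here is one more illustrative example (the class choices are my own, not from the original tutorial): three cards that stack on extra small screens and sit side by side from the md breakpoint up.

```HTML
<div class="container">
  <div class="row">
    <div class="col-12 col-md-4">Card 1</div>
    <div class="col-12 col-md-4">Card 2</div>
    <div class="col-12 col-md-4">Card 3</div>
  </div>
</div>
```

Below 768px each card takes the full 12-column width and wraps onto its own line; at md and above, the three col-md-4 columns share one row.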
fernandomoyano
1,887,210
Boost Team Efficiency with Smaller PRs
Ever felt overwhelmed by massive Pull Requests (PRs) that drag on for days, or even weeks? John Kline...
0
2024-07-02T22:37:28
https://dev.to/merico/boost-team-efficiency-with-smaller-prs-4314
engineering, productivity, devops, cicd
Ever felt overwhelmed by massive Pull Requests (PRs) that drag on for days, or even weeks? John Kline from Riot Games has the answers you've been looking for. In a recent [DevLogue episode](https://www.youtube.com/watch?v=CVMbUcBRk4c), John dished out game-changing insights on how reducing PR size can supercharge your team's performance.

## Slice and Dice: Breaking Down the Work

The first step to conquering PR bloat is breaking down work into bite-sized chunks. Forget those monstrous, long-lived PRs. Instead, aim for short-lived PRs that deliver small, incremental changes daily. This not only makes reviews a breeze but also minimizes risk and integration issues.

**Two Scenarios to Ponder:**

- **Case A**: Developers slice their tasks into smaller, daily PRs, resulting in manageable reviews, seamless integration, and a significant performance boost.
- **Case B**: Developers slog through extensive PRs, each taking days or weeks, leading to complex, hard-to-review changes.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wl14p1200riy33p5iawl.png)

## Daily Pushes & Feature Flags: Your New Best Friends

One behavioral tweak can change everything: push changes daily or almost daily. Embrace feature flags and branch by abstraction to push non-working code safely without disrupting the main branch. Riot Games is on this journey, aiming to build pipeline trust and foster a culture of incremental changes.

## Single Source of Truth: Keep It Simple

Long-lived branches with hefty changes? A recipe for disaster. They create multiple sources of truth, complicating integration and wreaking havoc on code reliability. Stick to small, frequent PRs merging into the main branch to maintain a consistent and reliable codebase.

## Empower Your Developers: Tools and Training

Reducing PR size isn't just about changing habits; it's about empowering your team with the right tools and knowledge. Concepts like branch by abstraction and feature flags might seem counterintuitive at first, but they are crucial. Invest in top-notch tools and comprehensive training to smooth the transition.

## The Bottom Line

Transforming your PR strategy with behavioral changes, smart planning, and the right tools can drastically enhance your team's velocity and reliability. Start small, think big, and watch your performance soar.

Catch the full podcast episode [here](https://www.youtube.com/watch?v=CVMbUcBRk4c) for John Kline's in-depth insights on reducing PR size and its myriad benefits.

{% embed https://www.youtube.com/watch?v=CVMbUcBRk4c %}

What are your experiences with PR sizes? Share your thoughts in the comments below! And don't forget, [DevInsight.ai](https://www.devinsight.ai/) is here to elevate your engineering intelligence. Try it out for free and revolutionize your team's workflow.
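To make "feature flags" and "branch by abstraction" concrete, here is a small illustrative Java sketch; the FeatureFlags interface and all names are hypothetical, and any flag provider works the same way. New code merges in small daily PRs behind the abstraction, and the flag flips only when the new path is ready.

```java
// Hypothetical flag provider; in practice this is backed by config or a flag service.
interface FeatureFlags {
    boolean isEnabled(String flagName);
}

// The abstraction both implementations sit behind.
interface CheckoutFlow {
    String run(String cartId);
}

class LegacyCheckout implements CheckoutFlow {
    public String run(String cartId) { return "legacy receipt for " + cartId; }
}

class NewCheckout implements CheckoutFlow {
    public String run(String cartId) { return "new receipt for " + cartId; }
}

class CheckoutService {
    private final FeatureFlags flags;
    private final CheckoutFlow legacy = new LegacyCheckout();
    private final CheckoutFlow next = new NewCheckout();

    CheckoutService(FeatureFlags flags) { this.flags = flags; }

    String checkout(String cartId) {
        // Small PRs can land NewCheckout incrementally; the flag guards exposure.
        CheckoutFlow flow = flags.isEnabled("new-checkout") ? next : legacy;
        return flow.run(cartId);
    }
}
```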
joshuapoddoku
1,909,418
Day 983 : Weary
liner notes: Professional : Had a couple of meetings. I responded to a couple of community...
0
2024-07-02T22:18:40
https://dev.to/dwane/day-983-weary-370a
hiphop, code, coding, lifelongdev
_liner notes_:

- Professional: Had a couple of meetings. I responded to a couple of community questions. I created a sample application to test a new feature and got it working. Asked the team some questions about what I found while building the application. The day went by pretty quickly.
- Personal: Last night, I went through Bandcamp and picked out what I'll buy this week. Went through some tracks. I actually worked on the logo and got to a good point. I think I just need to add some text and it should be good. I watched an episode of "Demon Slayer" and went to sleep.

![A photo of a mountain range with a beautiful sunset in the background. The mountains are rocky and jagged with some green grass and flowers growing on them. The sun is setting behind the mountains and is casting a golden light on the scene. Location: Caucasian Mountains.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8je4rzj86afjnldq5zyi.jpg)

My Chromebook just restarted while I was about to publish this blog post. Not everything was saved. I'm tired and weary, so I'm just wrapping this up. Going to eat dinner, work on my logo, put together social media posts for my Bandcamp picks, and watch an episode of "Demon Slayer".

Have a great night!

peace piece

Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com

{% youtube GHuOdcmQqVA %}
dwane
1,909,417
RETRIEVING LOST OR STOLEN BITCOIN // DIGITAL TECH GUARD RECOVERY
When a trusted family friend turned against me, I felt completely devastated. This individual...
0
2024-07-02T22:17:37
https://dev.to/helen_turner_7bff9f3bab77/retrieving-lost-or-stolen-bitcoin-digital-tech-guard-recovery-5g2a
recovery
When a trusted family friend turned against me, I felt completely devastated. This individual accessed my laptop and stole my cryptocurrency wallet, containing $120,000 worth of Bitcoin, which represented years of savings and investments. The sense of loss and violation was overwhelming. Upon a close friend's suggestion, I contacted DIGITAL TECH GUARD RECOVERY . From the moment I reached out, their team provided exceptional support and assurance. They listened to my story with empathy and assured me they had the expertise to help recover my funds. DIGITAL TECH GUARD RECOVERY team began a thorough investigation. They meticulously traced the unauthorized access and identified the methods used to steal my Bitcoin. Their technical prowess was evident as they navigated through the complex web of digital transactions. In a matter of weeks, they successfully recovered my $120,000 worth of Bitcoin. Their ability to restore my financial stability was nothing short of miraculous. They also offered guidance on securing my digital assets against future breaches, ensuring I was better protected moving forward. DIGITAL TECH GUARD RECOVERY dedication and expertise turned my despair into hope. Their unwavering support and professional excellence provided a lifeline in my darkest hour. I highly recommend their services to anyone who has faced similar betrayals. They are true guardians of digital security and trust. Throughout the recovery process, DIGITAL TECH GUARD RECOVERY exhibited a level of professionalism and empathy that exceeded my expectations. They didn't just see me as a client with a problem; they understood the emotional toll that such a betrayal can take. Their willingness to listen and provide support went above and beyond, making me feel valued and understood throughout the entire ordeal. Furthermore, their technical proficiency was truly remarkable. They left no stone unturned in their investigation, meticulously dissecting every aspect of the breach to ensure a successful recovery. Their attention to detail and expertise in navigating the complexities of digital transactions were evident every step of the way. Beyond recovering my funds, DIGITAL TECH GUARD RECOVERY also provided invaluable guidance on enhancing my digital security. They educated me on best practices for safeguarding my assets and offered practical solutions for fortifying my defenses against future threats. Their proactive approach to security gave me peace of mind knowing that I was better equipped to protect myself in the future. In conclusion, I cannot recommend DIGITAL TECH GUARD RECOVERY highly enough. Their professionalism, empathy, and technical expertise set them apart as true leaders in their field. If you find yourself in a similar situation, don't hesitate to reach out to them. They are more than just a recovery service; they are a beacon of hope in times of darkness. Reach out to DIGITAL TECH GUARD RECOVERY via email: contact@ digitaltechguard .com and their website page link https:// digitaltechguard. com
helen_turner_7bff9f3bab77
1,909,416
Using a Bash script to Automate the Creation of Users and Groups
It is a typical responsibility for a DevOps engineer to add, modify, and remove users and groups. Time...
0
2024-07-02T22:15:53
https://dev.to/christian_ochenehipeter_/using-a-bash-script-to-automate-the-creation-of-users-and-groups-2i8e
It is a typical responsibility for a DevOps engineer to add, modify, and remove users and groups. Automating this procedure saves time and reduces mistakes, particularly when onboarding numerous new developers. This tutorial will guide you through building a Bash script that can be used to automatically create users and their groups, create home directories, generate random passwords, and log all activity. **Goals** 1. Create users and the personal groups they belong to 2. Add users to designated groups 3. Create home directories and grant the necessary access 4. Generate random passwords and save them safely 5. Keep track of every action for auditing needs **Requirements** • Input File: A text file with the format username;group1,group2 that contains usernames and groups. • Log File: A file used to keep track of every action. • Password File: A file where generated passwords are safely kept. **Now let's get started.** A _**"shebang"**_ is the first line that we write at the beginning of every shell script; it sounds catchy, doesn't it? ``` #!/bin/bash ``` The _**shebang**_ tells the system which interpreter to use when the file is executed. With that out of the way, we can discuss the juicy specifics. **Launching the Script** The script begins by specifying the locations of the password and log files (you are free to give them any names you choose): ``` LOGFILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" ``` **Check the File Input** The script determines whether an input file has been supplied as an argument: ``` if [ -z "$1" ]; then echo "Usage: $0 <name-of-text-file>" exit 1 fi ``` If no file is supplied, the script exits with a usage message instructing our users on how to utilize our script. **Make Password and Log Files** This little piece of code creates the required files and directories and sets the right permissions: ``` mkdir -p /var/secure touch $LOGFILE $PASSWORD_FILE chmod 600 $PASSWORD_FILE ``` We make a directory called "/var/secure/" which will keep our passwords. After that we create the two files that were defined above for logging and saving passwords. Setting $PASSWORD_FILE to mode 600 with chmod ensures that only the file's owner (the user running the script) can read or write it. **Function to Generate Random Passwords** We will create a function to provide our various users with secure, random passwords. ``` generate_random_password() { local length=${1:-10} # Default length is 10 if no argument is provided tr -dc 'A-Za-z0-9!?%+=' < /dev/urandom | head -c $length } ``` The function accepts an argument specifying the desired password length; if none is supplied, it defaults to 10. - tr -dc 'A-Za-z0-9!?%+=': deletes every character that is not in the set A-Za-z0-9!?%+= - < /dev/urandom: feeds the Linux kernel's random byte stream into the command above. - | head -c $length: keeps only the first $length characters of the filtered stream. **Logging Function** ``` log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOGFILE } ``` This function receives a message as an argument and appends it to $LOGFILE with the current date and time. **The create_user() Function** The function create_user() manages the process of creating users and their groups. Username and groups are the two arguments it requires.
``` create_user() { local username=$1 local groups=$2 if getent passwd "$username" > /dev/null; then log_message "User $username already exists" else useradd -m $username log_message "Created user $username" fi # Add user to specified groups groups_array=($(echo $groups | tr "," "\n")) for group in "${groups_array[@]}"; do if ! getent group "$group" >/dev/null; then groupadd "$group" log_message "Created group $group" fi usermod -aG "$group" "$username" log_message "Added user $username to group $group" done # Set up home directory permissions chmod 700 /home/$username chown $username:$username /home/$username log_message "Set up home directory for user $username" # Generate a random password password=$(generate_random_password 12) echo "$username:$password" | chpasswd echo "$username,$password" >> $PASSWORD_FILE log_message "Set password for user $username" } ``` **Let's dissect it together.** - Verifying the Existence of the User ``` if getent passwd "$username" > /dev/null; then log_message "User $username already exists" else useradd -m $username log_message "Created user $username" fi ``` If the user already exists, our script logs that fact and continues; if not, a new user is created and the action is logged. NOTE: Redirecting the command's output to /dev/null prevents it from interfering with our logs. _**NOTE: Unix handles the creation of personal groups for us when we establish a new user, so we won't be creating them manually for each user.**_ - User Addition to Identified Groups After that, we continue by adding the users to their respective groups. ``` groups_array=($(echo $groups | tr "," "\n")) for group in "${groups_array[@]}"; do if ! getent group "$group" > /dev/null; then groupadd "$group" log_message "Created group $group" fi usermod -aG "$group" "$username" log_message "Added user $username to group $group" done ``` The code sample splits the groups string at the commas and stores the pieces in a groups_array variable. Next, it iterates over each group, creating the group if it doesn't already exist and adding the user to it: - Creating Permissions for the Home Directory ``` chmod 700 /home/$username chown $username:$username /home/$username log_message "Set up home directory for user $username" ``` chmod 700 /home/$username: Sets the home directory permissions so only the user has full access (read, write, execute). chown $username:$username /home/$username: Changes the ownership of the home directory to the specified user and their group. log_message "Set up home directory for user $username": Logs a message indicating the home directory setup. - Giving Every User A Random Password ``` password=$(generate_random_password 12) echo "$username:$password" | chpasswd echo "$username,$password" >> $PASSWORD_FILE log_message "Set password for user $username" ``` This little piece of code generates a new password using the function we previously built, sets the user's password to the generated value, saves the username and password to the $PASSWORD_FILE, and then logs a message confirming the change was successful.
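Before we look at how the script consumes the input file, here is a hypothetical example of the `username;group1,group2` format it expects (the usernames and group names below are made up):

```
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
```

Each line splits at the semicolon into a username and a comma-separated list of groups, which are exactly the two arguments that create_user() receives.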
**Examining the Input File** The script reads the input file line by line, providing the arguments $username and $groups to the create_user() function on each line: ``` while IFS=';' read -r username groups; do create_user "$username" "$groups" done < "$1" ``` **Now, Compile Everything...** ``` #!/bin/bash # Log file location LOGFILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Check if the input file is provided if [ -z "$1" ]; then echo "Error: No file was provided" echo "Usage: $0 <name-of-text-file>" exit 1 fi # Create log and password files mkdir -p /var/secure touch $LOGFILE $PASSWORD_FILE chmod 600 $PASSWORD_FILE generate_random_password() { local length=${1:-10} # Default length is 10 if no argument is provided LC_ALL=C tr -dc 'A-Za-z0-9!?%+=' < /dev/urandom | head -c $length } # Function to create a user create_user() { local username=$1 local groups=$2 if getent passwd "$username" > /dev/null; then echo "User $username already exists" | tee -a $LOGFILE else useradd -m $username echo "Created user $username" | tee -a $LOGFILE fi # Add user to specified groups groups_array=($(echo $groups | tr "," "\n")) for group in "${groups_array[@]}"; do if ! getent group "$group" >/dev/null; then groupadd "$group" echo "Created group $group" | tee -a $LOGFILE fi usermod -aG "$group" "$username" echo "Added user $username to group $group" | tee -a $LOGFILE done # Set up home directory permissions chmod 700 /home/$username chown $username:$username /home/$username echo "Set up home directory for user $username" | tee -a $LOGFILE # Generate a random password password=$(generate_random_password 12) echo "$username:$password" | chpasswd echo "$username,$password" >> $PASSWORD_FILE echo "Set password for user $username" | tee -a $LOGFILE } # Read the input file and create users while IFS=';' read -r username groups; do create_user "$username" "$groups" done < "$1" echo "User creation process completed." | tee -a $LOGFILE ``` **In summary** This script offers a streamlined method for managing the creation of users and groups while making sure that all required actions are taken securely and recorded for audit purposes. SysOps engineers can save time and lower the possibility of mistakes during user onboarding by automating these procedures. For additional information and to begin your programming career, go to https://hng.tech/internship or https://hng.tech/premium. Please get in touch if you have any queries or ideas for enhancements. Cheers to automation!
christian_ochenehipeter_
1,909,413
Migrating from Nuxt 2 to Nuxt 3: A Comprehensive Guide
Migrating from Nuxt 2 to Nuxt 3 can be a tough challenge depending on the scope of your project and...
0
2024-07-02T22:08:16
https://dev.to/fido1hn/migrating-from-nuxt-2-to-nuxt-3-a-comprehensive-guide-56na
vue, nuxt, webdev, javascript
Migrating from Nuxt 2 to Nuxt 3 can be a tough challenge depending on the scope of your project and the depth of your knowledge regarding these frameworks. As of 30 June 2024, Nuxt 2 has officially reached its EOL (end of life) and will no longer receive support from the Nuxt team, as efforts have shifted to maintaining and creating greater and better tools around Nuxt 3 and Nuxt 4, which is slated for release soon. This guide will walk you through the key differences between Nuxt 2 and Nuxt 3 and provide step-by-step instructions for a smooth transition. ## Understanding the Differences Before diving into the migration process, it's essential to understand the significant differences between Nuxt 2 and Nuxt 3. Some of the most notable changes include: 1. **Vue Version:** Nuxt 3 is built on top of Vue 3, which introduces new features like the Composition API and deprecates or changes a number of Vue 2 APIs (including some global APIs), so code written for Nuxt 2 may be affected by these breaking changes. 2. **Configuration:** Nuxt 3 uses a new configuration format based on the defineNuxtConfig function, which provides better TypeScript support and an improved developer experience. You make this change in your nuxt.config.js file. 3. **Routing:** Nuxt 3 introduces a new pages directory for routing and layout management, replacing the pages and layouts folders used in Nuxt 2. 4. **Middleware:** Nuxt 3 has a revamped middleware system, providing more flexibility and control over your application's behavior. ## Migration Process To ensure a smooth transition, follow these steps to migrate your Nuxt 2 project to Nuxt 3: 1. **Create a backup:** Before making any changes, create a backup of your project or work in a separate Git branch to avoid any potential data loss. 2. **Update dependencies:** Update your package.json file to use the latest versions of Nuxt 3, Vue 3, and other related packages. Remove any Nuxt 2 and Vue 2 dependencies. 3. **Update configuration:** Convert your nuxt.config.js or nuxt.config.ts file to use the defineNuxtConfig format (see the sketch after this list). Ensure that all configuration options are compatible with Nuxt 3. 4. **Migrate layout components:** Move your layout components from the layouts directory to the new app/layouts directory. Update their structure and code to follow Nuxt 3's new layout syntax. 5. **Migrate pages and components:** Review your pages and components for compatibility with Vue 3 and Nuxt 3. Update any outdated syntax or deprecated features, and ensure all components are properly imported and registered. 6. **Update routing:** Configure your routes using the new pages directory and its associated files. Update any programmatic routing code to use the new Nuxt 3 router API. 7. **Review middleware:** Convert your existing middleware to Nuxt 3's new middleware format. Test the functionality of each middleware and make any necessary adjustments. 8. **Update plugins and modules:** Review any third-party plugins and modules used in your project. Update or replace them as needed to ensure compatibility with Nuxt 3. 9. **Test and debug:** Thoroughly test your application for any issues that may have arisen during the migration. Use browser developer tools, Nuxt's built-in debugging features, and Vue DevTools to help identify and fix problems.
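To make step 3 concrete, here is a minimal sketch of what a nuxt.config.ts might look like in the defineNuxtConfig format. The specific options, paths, and module names below are illustrative placeholders, not requirements of your project:

```typescript
// nuxt.config.ts (Nuxt 3): defineNuxtConfig is auto-imported by Nuxt,
// so no import statement is needed in this file
export default defineNuxtConfig({
  // global stylesheets (path is a placeholder)
  css: ['~/assets/css/main.css'],

  // third-party modules are declared in a single `modules` array
  modules: ['@pinia/nuxt'],

  // runtimeConfig replaces Nuxt 2's env/publicRuntimeConfig pattern;
  // keys under `public` are exposed to the client
  runtimeConfig: {
    public: {
      apiBase: '/api',
    },
  },
})
```

A nice side effect of this format is that your editor gets full type checking and autocompletion for the configuration object.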
## Conclusion Migrating from Nuxt 2 to Nuxt 3 can seem like a daunting task, but by following this guide and carefully reviewing your code, you can successfully update your application to take advantage of the latest features and improvements offered by Nuxt 3. Be patient, take it one step at a time, and don't hesitate to seek help from the active Nuxt community if you encounter any issues along the way.
fido1hn
1,909,411
Overcoming Backend Challenges: My Journey and Aspirations with HNG Internship
Introduction Hello, I'm Kingsley, and I'm embarking on an exciting journey with the HNG Internship....
0
2024-07-02T22:03:50
https://dev.to/that_khayy/overcoming-backend-challenges-my-journey-and-aspirations-with-hng-internship-42jc
backenddevelopment, hng, node, softwaredevelopment
**Introduction** Hello, I'm Kingsley, and I'm embarking on an exciting journey with the HNG Internship. As a budding backend developer, I recently encountered a particularly challenging problem that tested my skills and perseverance. In this blog post, I will walk you through the problem I faced, how I tackled it, and why I'm thrilled to be a part of the HNG Internship. **The Problem** The problem emerged while working on a user authentication system for a web application. The application needed to securely register users, handle login requests, and manage sessions. The complexity was compounded by the requirement to integrate social media login options and ensure all data transfers were encrypted. **Step-by-Step Solution** 1. **Understanding the Requirements**: I started by outlining the key requirements: - Secure user registration and login. - Session management. - Social media login integration. - Data encryption. 2. **Choosing the Right Tools**: Based on the requirements, I chose: - **Node.js and Express.js** for the backend framework. - **Passport.js** for authentication and social media login integration. - **bcrypt** for password hashing. - **JWT (JSON Web Tokens)** for session management. - **HTTPS** for secure data transfer. 3. **Setting Up the Project**: I initialized a new Node.js project and installed the necessary dependencies: ```bash npm init -y npm install express passport bcrypt jsonwebtoken dotenv ``` 4. **Implementing User Registration**: I created an endpoint for user registration that hashes the password using bcrypt before storing it in the database: ```javascript const bcrypt = require('bcrypt'); const saltRounds = 10; app.post('/register', async (req, res) => { const { username, password } = req.body; try { const hashedPassword = await bcrypt.hash(password, saltRounds); // Store user with hashed password in the database res.status(201).send('User registered successfully'); } catch (error) { res.status(500).send('Error registering user'); } }); ``` 5. **Implementing Login and JWT Authentication**: I set up the login endpoint to validate the user and issue a JWT for session management: ```javascript const jwt = require('jsonwebtoken'); const secretKey = 'your_secret_key'; app.post('/login', async (req, res) => { const { username, password } = req.body; // Retrieve user from the database const user = {}; // assume this is the retrieved user const match = await bcrypt.compare(password, user.hashedPassword); if (match) { const token = jwt.sign({ username }, secretKey, { expiresIn: '1h' }); res.json({ token }); } else { res.status(401).send('Invalid credentials'); } }); ``` 6. **Integrating Social Media Login**: I used Passport.js to integrate social media login options like Google and Facebook: ```javascript const passport = require('passport'); const GoogleStrategy = require('passport-google-oauth20').Strategy; passport.use(new GoogleStrategy({ clientID: 'your_client_id', clientSecret: 'your_client_secret', callbackURL: '/auth/google/callback' }, (token, tokenSecret, profile, done) => { // Find or create user in the database done(null, profile); })); app.get('/auth/google', passport.authenticate('google', { scope: ['profile'] })); app.get('/auth/google/callback', passport.authenticate('google', { failureRedirect: '/' }), (req, res) => { res.redirect('/'); }); ``` 7. 
**Ensuring Secure Data Transfer**: Finally, I set up HTTPS to encrypt data transfer: ```javascript const https = require('https'); const fs = require('fs'); const options = { key: fs.readFileSync('key.pem'), cert: fs.readFileSync('cert.pem') }; https.createServer(options, app).listen(port, () => { console.log(`Secure server running at https://localhost:${port}`); }); ``` #### Why HNG Internship? I am excited to be a part of the HNG Internship because it provides a unique opportunity to learn from industry experts, collaborate with fellow developers, and work on real-world projects. The internship offers a structured program that helps me improve my skills and gain valuable experience in backend development. #### Conclusion Solving this backend challenge was a significant milestone in my development journey. It reinforced the importance of understanding requirements, choosing the right tools, and implementing secure practices. As I continue to learn and grow with the HNG Internship, I look forward to tackling more complex problems and contributing to innovative projects. Learn more about the HNG Internship and its benefits [here](https://hng.tech/internship) and [here](https://hng.tech/premium). If you're interested in hiring talented developers from the program, check out [this link](https://hng.tech/hire).
that_khayy
1,909,409
Understanding NDAs and Why You Must Sign NDA Online with OpenSign
In today's fast-paced and interconnected business world, safeguarding confidential information is...
0
2024-07-02T22:01:18
https://dev.to/docusignlog-in/understanding-ndas-and-why-you-must-sign-nda-online-with-opensign-4p71
programming, opensource, signndaonline, digitalsignatures
In today's fast-paced and interconnected business world, safeguarding confidential information is more critical than ever. Whether you're a startup founder, a seasoned entrepreneur, or a professional navigating corporate landscapes, Non-Disclosure Agreements (NDAs) play a pivotal role in protecting sensitive information. NDAs serve as legally binding contracts that ensure the confidentiality of shared information between parties, fostering trust and safeguarding intellectual property. This comprehensive blog article delves into the essence of NDAs, their significance, and why utilizing OpenSign for NDA management is a game-changer for businesses and individuals alike. A Non-Disclosure Agreement (NDA), also known as a confidentiality agreement, is a legal contract between two or more parties that outlines the confidentiality of shared information and restricts its disclosure to third parties. NDAs are commonly used in various business contexts to protect sensitive information, trade secrets, proprietary data, and other confidential materials. There are three main types of NDAs: unilateral NDAs, bilateral (mutual) NDAs, and multilateral NDAs. In a unilateral NDA, only one party discloses confidential information to the other party. The receiving party is obligated to keep the information confidential. A bilateral NDA involves two parties exchanging confidential information. Both parties agree to protect each other's confidential information from unauthorized disclosure. In a multilateral NDA, three or more parties share confidential information. This type of NDA is useful in complex business arrangements involving multiple stakeholders. An effective NDA should include key elements like the definition of confidential information, obligations of the receiving party, exclusions from confidentiality, duration of the agreement, remedies for breach, and governing law and jurisdiction. These elements ensure that the NDA is comprehensive and enforceable, providing the necessary protection for confidential information. NDAs are essential for several reasons. For startups and tech companies, intellectual property (IP) is often the most valuable asset. NDAs help safeguard IP by preventing unauthorized disclosure of proprietary information, innovative ideas, and technological advancements. This protection is crucial for maintaining a competitive edge in the market. NDAs foster trust between parties by establishing clear expectations regarding the handling of confidential information. When entering into business negotiations, partnerships, or collaborations, NDAs create a sense of security and professionalism, encouraging open communication and cooperation. In today's digital age, information can be easily disseminated and misused. NDAs act as a legal barrier against unauthorized disclosure, ensuring that sensitive information remains confidential and is not exploited by competitors or malicious actors. NDAs facilitate business relationships by providing a legal framework for sharing confidential information. Whether negotiating a merger, exploring a partnership, or discussing a potential investment, NDAs enable parties to exchange information without fear of exposure. Breach of confidentiality can lead to significant legal and financial repercussions. NDAs mitigate these risks by providing legal recourse in case of a breach, protecting the disclosing party's interests and ensuring accountability. 
While NDAs are crucial, managing them can be challenging, especially for businesses dealing with multiple agreements. Common challenges include the time-consuming process of drafting, negotiating, and signing NDAs, the risk of human error, security concerns, and inefficient tracking and management. Traditional methods involving physical documents and manual signatures further exacerbate these challenges. OpenSign revolutionizes the way businesses and individuals handle NDAs by offering a seamless, secure, and efficient digital signature solution. OpenSign provides an intuitive and user-friendly interface that simplifies the entire NDA signing process. Whether you're a seasoned professional or new to digital signatures, OpenSign's platform is designed to be accessible and easy to navigate. Security is paramount when dealing with confidential information. OpenSign employs advanced encryption and security protocols to ensure the integrity and confidentiality of your NDAs. With features like recipient authentication using OTP and secure document storage, you can trust that your sensitive information is protected. Say goodbye to the time-consuming process of printing, signing, scanning, and sending physical documents. OpenSign allows you to create, send, and sign NDAs digitally in a matter of minutes. This efficiency frees up valuable time and resources for more critical business activities. OpenSign's automated workflows minimize the risk of human error by guiding you through each step of the signing process. Automatic reminders and notifications ensure that all parties complete their signatures on time, reducing the chances of oversight. OpenSign seamlessly integrates with popular platforms like Dropbox, Zapier, and more, allowing you to streamline your document management processes. You can import templates, organize documents, and automate workflows, enhancing overall productivity. OpenSign offers a comprehensive suite of features tailored to meet your specific needs. From advanced field validations and regular expression validations to custom email templates and document templates, OpenSign provides the tools necessary to create robust NDAs. Whether you're a small startup or a large enterprise, OpenSign caters to businesses of all sizes. With flexible pricing plans and scalable solutions, OpenSign grows with your business, ensuring that your NDA management remains efficient and effective. OpenSign ensures that your NDAs comply with legal standards and regulations. The platform provides completion certificates and audit trails, offering a verifiable record of the signing process. This compliance is crucial for upholding the enforceability of your NDAs. By adopting OpenSign, you contribute to a more sustainable and environmentally friendly business practice. Digital signatures reduce the need for paper, printing, and shipping, helping to minimize your carbon footprint. OpenSign understands that every business has unique requirements. The platform offers customizable solutions, including whitelabeling, custom domains, and priority support for enterprise clients. This customization ensures that OpenSign aligns with your brand and operational needs. In an era where information is a valuable currency, NDAs are indispensable tools for protecting confidential information and fostering trust in business relationships. However, managing NDAs can be challenging without the right tools. 
OpenSign offers a state-of-the-art digital signature solution that streamlines the NDA signing process, enhances security, and boosts efficiency. By leveraging OpenSign, businesses and individuals can navigate the complexities of NDA management with ease, ensuring that their confidential information remains protected and their business relationships thrive. Whether you're a startup founder looking to safeguard your innovative ideas or a seasoned entrepreneur seeking to streamline your NDA processes, OpenSign is your go-to solution for all your digital signature needs. Embrace the future of NDA management with OpenSign and experience the benefits of a seamless, secure, and efficient digital signature platform. Visit www.opensignlabs.com to learn more and start transforming the way you sign NDAs online today.
alexopensource
1,909,410
Enhancing Content Testing with Comprehensive Bug Reporting
In a modern software organization, a robust testing strategy is essential for ensuring the quality of...
0
2024-07-02T21:59:35
https://dev.to/maria_koko_474/enhancing-content-testing-with-comprehensive-bug-reporting-5gii
In a modern software organization, a robust testing strategy is essential for ensuring the quality of your digital products. One key aspect of this testing strategy is the use of content testing techniques - a set of methods and tools that allow you to evaluate the effectiveness, usability, and performance of your website or application content. As part of these content testing efforts, several issues requiring attention were identified. To ensure these problems are addressed effectively, they are documented using a comprehensive bug reporting template. This approach not only improves the efficiency of the bug resolution process but also ensures that the development team has a clear understanding of the problem and the necessary context to implement effective solutions. Now, let's dive deeper into some specific issues that were identified during the testing process and explore potential solutions for the [scrape any website](https://scrapeanyweb.site/) app. One of the most critical issues identified during the content testing process was the non-functionality of the "Edit" button on the URL section. The inability to edit the URL information is a significant limitation for users, as it prevents them from updating the URL details if need be. This can lead to user frustration and a poor overall experience with the application. To resolve this issue, the development team should investigate the root cause and implement the necessary fixes to ensure the edit button is fully functional. This may involve reviewing the code to identify and fix any bugs or errors, checking the integration between the front-end and back-end components to ensure the edit functionality is properly implemented, and verifying the application configuration to ensure the edit feature is enabled and properly configured. Another significant issue that was identified is the lack of user-friendly feedback and confirmation prompts in the application. Specifically, the application did not provide any prompts or confirmation messages to the user during the URL scraping process or when deleting files. This lack of feedback can lead to confusion, uncertainty, and potential data loss for users. To resolve this issue, it is recommended to implement progress indicators, status messages, and confirmation dialogs to inform the user about the status of the URL scraping process. Additionally, implementing a confirmation prompt that requires the user to explicitly confirm their intent before deleting a file can help prevent accidental data loss. The next issue was the inability to recover from accidental delete actions. If a user mistakenly clicks on the delete button while scraping a URL, the scraping process is interrupted, and the user has to start over from the beginning, losing all their progress. To address this problem, we recommend temporarily disabling the delete functionality during scraping, displaying a confirmation prompt before deleting a file, and providing a recovery mechanism that allows users to resume the scraping process from where it left off. The last issue identified was the application's handling of error codes. Instead of displaying valid HTTP status codes when an error occurred, the application displayed a generic "Error: -1" message. This non-standard error code does not provide any useful information to the user about the nature of the problem or how to resolve it.
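Since the findings do not describe the app's backend stack, the following is a purely illustrative, Express-style sketch (all names hypothetical) of how failures could be translated into meaningful HTTP responses instead of a generic "Error: -1":

```typescript
import express, { Request, Response, NextFunction } from 'express';

const app = express();

// Hypothetical error type a scraping service might throw
class ScrapeError extends Error {
  constructor(message: string, public readonly status: number) {
    super(message);
  }
}

// Example route that surfaces a typed failure instead of "Error: -1"
app.get('/scrape', (_req: Request, _res: Response) => {
  throw new ScrapeError('Target site took too long to respond', 504);
});

// Centralized error handler: a real HTTP status code plus a readable
// message gives the user something actionable
app.use((err: Error, _req: Request, res: Response, _next: NextFunction) => {
  if (err instanceof ScrapeError) {
    res.status(err.status).json({ error: err.message });
    return;
  }
  res.status(500).json({ error: 'Unexpected server error. Please try again later.' });
});
```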
To address this issue, we recommend carefully reviewing the code responsible for handling errors and translating them into HTTP status codes along the lines of the sketch above, displaying clear and meaningful error messages to the user, and providing troubleshooting guidance to help users resolve issues more effectively. [Findings link](https://docs.google.com/spreadsheets/d/19XDentX50iYNKjNNJ1AohdvSFwCcn14d44bxiUAJ9aA/edit?usp=sharing) By addressing these issues and implementing the recommended solutions, the application can significantly enhance its content testing capabilities, improve user experience, and ensure the quality and effectiveness of its content. By providing clear and informative feedback, implementing safeguards against accidental actions, and handling errors effectively, the application can deliver a more engaging, reliable, and user-centric experience. Download the scrape app from the [Windows store download page](https://apps.microsoft.com/detail/9mzxn37vw0s2?hl=en-us&gl=NG). Thanks for reading.
maria_koko_474
1,884,337
Generating PDF documents in Laravel
Laravel and DomPDF: Step by step guide to generating PDF Documents with Images and...
0
2024-07-02T21:56:47
https://dev.to/alphaolomi/generating-pdf-documents-in-laravel-n07
webdev, laravel, php, tutorial
# Laravel and DomPDF: Step by step guide to generating PDF Documents with Images and CSS Creating PDF documents is a common requirement in web applications, especially for generating invoices, receipts, certificates, tickets, and various reports. In this comprehensive tutorial, we'll delve into using Laravel and DomPDF to generate PDF documents with images and CSS. We'll cover configuration options, design considerations, output size, performance, and database queries. Additionally, we'll discuss tips and tricks for handling page breaks, loading images using base64, and more. ## Prerequisites Before we start, ensure you have the following installed: - PHP >=8.2 - Composer 2+ - Laravel 10+ ## Introduction DomPDF is a popular PHP library that allows you to generate PDF documents from HTML content. It supports CSS styling, images, and various configuration options. By integrating DomPDF with Laravel, you can easily create sophisticated PDF documents using Blade templates and Laravel's powerful features. Other popular PDF libraries include [TCPDF](https://tcpdf.org/), [FPDF](http://www.fpdf.org/), and [Snappy](https://github.com/KnpLabs/snappy). However, DomPDF is widely used due to its ease of integration and robust feature set. In this tutorial, we'll walk through the process of setting up a Laravel project, configuring DomPDF, creating a controller to handle PDF generation, designing a Blade template for the PDF content, adding routes, and optimizing performance. We'll also discuss advanced configuration options and provide tips and tricks for generating high-quality PDF documents. ## Assumptions This tutorial assumes you have a basic understanding of Laravel and PHP. If you're new to Laravel, consider going through the official [Laravel documentation](https://laravel.com/docs) to familiarize yourself with the framework. Otherwise, you can follow the [Laravel Bootcamp](https://bootcamp.laravel.com/) to get started with Laravel. ## Step 1: Setting Up Laravel Project First, create a new Laravel project if you don't already have one; if you're using an existing project, you can skip this step. ```bash composer create-project --prefer-dist laravel/laravel pdf-tutorial cd pdf-tutorial ``` Next, install DomPDF: ```bash composer require barryvdh/laravel-dompdf ``` Publish the configuration file: ```bash php artisan vendor:publish --provider="Barryvdh\DomPDF\ServiceProvider" ``` ## Step 2: Configuring DomPDF Open the `config/dompdf.php` file. The configuration file contains options for customizing the PDF output, including the default paper size, orientation, font, and more. - **Paper size:** You can set the default paper size. ```php 'default_paper_size' => 'a4', ``` - **Orientation:** Set the default orientation (portrait or landscape). ```php 'orientation' => 'portrait', ``` - **Font:** You can specify the default font and add custom fonts. 
```php 'default_font' => 'serif', ``` ## Step 3: Creating a Controller Create a controller to handle PDF generation: ```bash php artisan make:controller PDFController ``` In `app/Http/Controllers/PDFController.php`, add the following code: ```php <?php namespace App\Http\Controllers; use Illuminate\Http\Request; use PDF; class PDFController extends Controller { public function generatePDF() { $data = [ 'title' => 'Laravel PDF Example', 'date' => date('m/d/Y'), ]; $pdf = PDF::loadView('myPDF', $data); return $pdf->download('document.pdf'); } } ``` ## Step 4: Creating a Blade Template Create a Blade template for the PDF content: ```bash touch resources/views/myPDF.blade.php ``` Add the following content to `myPDF.blade.php`: ```html <!DOCTYPE html> <html> <head> <title>Laravel PDF Example</title> <style> body { font-family: Arial, sans-serif; } .container { margin: 0 auto; padding: 20px; } .header { text-align: center; margin-bottom: 20px; } .content { font-size: 12px; } </style> </head> <body> <div class="container"> <div class="header"> <h1>{{ $title }}</h1> <p>Date: {{ $date }}</p> </div> <div class="content"> <p>This is an example of a PDF document generated using Laravel and DomPDF.</p> </div> </div> </body> </html> ``` ## Step 5: Adding Routes Add routes to handle PDF generation in `routes/web.php`: ```php use App\Http\Controllers\PDFController; Route::get('generate-pdf', [PDFController::class, 'generatePDF']); ``` ## Step 6: Adding Images You can add images to the PDF by embedding them as base64-encoded strings or using URLs. Images can be embedded directly in the Blade template using base64 encoding. For example, to embed an image from the `public/images` directory, this is how you can do it: ```html <img src="data:image/png;base64,{{ base64_encode(file_get_contents(public_path('images/logo.png'))) }}" alt="Logo"> ``` Or directly from a URL: ```html <img src="{{ asset('images/logo.png') }}" alt="Logo"> ``` ## Step 7: Optimizing Performance ### Database Queries When dealing with large datasets (e.g., 1,000+ records), use pagination or chunking to manage memory usage: ```php $data = DB::table('users')->paginate(50); $pdf = PDF::loadView('myPDF', ['data' => $data]); ``` ### Output Size To reduce the output size, minimize the use of heavy images and opt for vector graphics when possible. Also, use efficient CSS. ### Page Breaks Ensure content is well-structured for page breaks. Use CSS to handle page breaks: ```css .page-break { page-break-after: always; } ``` And in your Blade template: ```html <div class="page-break"></div> ``` ## Step 8: Advanced Configuration For more advanced configurations, refer to the DomPDF documentation. You can customize almost everything, from margins to the way fonts are loaded. ### Using Custom Fonts To use custom fonts, first, add them to your project and configure DomPDF to use them: ```php 'custom_font_dir' => base_path('resources/fonts/'), 'custom_font_data' => [ 'custom-font' => [ 'R' => 'CustomFont-Regular.ttf', 'B' => 'CustomFont-Bold.ttf', ] ], ``` In your Blade template: ```html <style> body { font-family: 'custom-font', sans-serif; } </style> ``` ## Conclusion By following this step-by-step guide, you can generate sophisticated PDF documents using Laravel and DomPDF, complete with images and CSS styling. This tutorial has covered essential configuration options, design considerations, and performance optimization. You can expand this foundation to build a robust document generation system for your Laravel application. 
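A closing tip that may be useful while you iterate on templates: besides `download()`, barryvdh/laravel-dompdf can also stream the generated PDF inline in the browser. A minimal variant of the Step 3 controller method (the method name here is mine, not part of the tutorial):

```php
// In PDFController: render the PDF inline instead of forcing a download
public function previewPDF()
{
    $data = [
        'title' => 'Laravel PDF Example',
        'date'  => date('m/d/Y'),
    ];

    // stream() sends the PDF with an inline Content-Disposition,
    // so it opens in the browser tab rather than downloading
    return PDF::loadView('myPDF', $data)->stream('document.pdf');
}
```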
### Potential Series and Repository This tutorial is part of a series on PDF generation with Laravel. A complete repository with various document templates (invoices, receipts, certificates, tickets, etc.) can be found [here](#). Feel free to contribute and expand the collection. Happy coding!
alphaolomi
1,909,408
[Game of Purpose] Day 45 - HUD
Today I added a HUD. The first version includes engine status. It properly reacts when the status changes.
27,434
2024-07-02T21:49:19
https://dev.to/humberd/game-of-purpose-day-45-hud-51h7
gamedev
Today I added a HUD. The first version includes engine status. It properly reacts when the status changes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6w6hhqjgo2swb5b2824.png) {% embed https://youtu.be/B05i21UdWA4 %}
humberd
1,909,407
Automating User Management with Advanced Bashscript
As part of the requirements of my DevOps internship with HNG, I am required to create a bash script...
0
2024-07-02T21:49:10
https://dev.to/laraadeboye/automating-user-management-with-advanced-bashscript-4m8p
bash, advancedbashscripting, createusersinlinux, automation
As part of the requirements of my DevOps internship with [HNG](https://hng.tech/internship), I am required to create a bash script to automate user management. Managing users in a Linux environment can be a repetitive and error-prone task if done manually, especially when dealing with a large number of users. To simplify this process, you can use a bash script to automate user creation, group assignment, and password management. Let's walk through a comprehensive bash script designed to streamline user management. This script reads a list of usernames and groups from a text file, creates users and groups accordingly, sets up home directories, generates random passwords, and logs all actions for audit purposes. You can find the complete script on my [GitHub repository](https://github.com/laraadeboye/user-creation-script). This bash script performs several key tasks: 1. **Ensures root privileges**: The script checks if it is run as root. 2. **Logs actions**: Logs important actions and errors for audit and debugging. 3. **Validates inputs**: Ensures usernames and group names are valid. 4. **Creates users and groups**: Adds new users and groups as specified in an input file. 5. **Sets passwords**: Generates and sets random passwords for new users. 6. **Configures home directories**: Sets appropriate permissions for user home directories. ### Detailed Explanation of the Script - **Script Header and Logging** The script begins by defining a logging function to record actions and errors to a log file. ```bash #!/usr/bin/bash log() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE" } ``` - **Password Generation** A function to generate random passwords is defined, which uses /dev/urandom for secure random number generation. ```bash generate_password() { local password_length=12 tr -dc A-Za-z0-9 </dev/urandom | head -c $password_length } ``` - **Ensuring Root Privileges** The script checks if it is run with root privileges. If not, it logs an error and exits. The root user has user ID 0. ```bash if [ "$(id -u)" -ne 0 ]; then echo "This script must be run as root or with sudo privileges" >&2 log "Script not run as root or with sudo privileges" exit 1 fi ``` - **File Paths and Initial Checks** The script sets file paths for the user input file, log file, and password storage. It also verifies the existence of these files. ```bash USER_FILE="$1" LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" if [ -z "$USER_FILE" ]; then echo "Usage: $0 <name-of-text-file>" log "No user file provided. Usage: $0 <name-of-text-file>" exit 1 fi if [ ! -f "$USER_FILE" ]; then echo "Input file not found" log "Input file '$USER_FILE' not found" exit 1 fi if [ ! -f "$LOG_FILE" ]; then touch "$LOG_FILE" chmod 0600 "$LOG_FILE" log "Log file created: $LOG_FILE" fi if [ ! -f "$PASSWORD_FILE" ]; then mkdir -p /var/secure touch "$PASSWORD_FILE" chmod 0600 "$PASSWORD_FILE" log "Password file created: $PASSWORD_FILE" fi ``` - **Input Validation** The script validates the usernames and group names using regular expressions to ensure they contain only alphanumeric characters, hyphens, and underscores. ```bash validate_username() { if [[ ! "$1" =~ ^[a-zA-Z0-9_-]+$ ]]; then return 1 fi return 0 } validate_groups() { IFS=',' read -ra group_list <<< "$1" for group in "${group_list[@]}"; do if [[ ! "$group" =~ ^[a-zA-Z0-9_-]+$ ]]; then return 1 fi done return 0 } ``` - **User and Group Creation** The script reads the user file line by line, creating users and their groups as necessary. 
It also ensures that each user has a personal group and sets up the home directory with appropriate permissions. ```bash while IFS=';' read -r username groups; do username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) if [ -z "$username" ] || [ -z "$groups" ]; then log "Invalid line format in user file: '$username;$groups'" user_creation_failed=true continue fi if ! validate_username "$username"; then log "Invalid username format: '$username'" user_creation_failed=true continue fi if ! validate_groups "$groups"; then log "Invalid group format: '$groups'" group_creation_failed=true continue fi if id "$username" &>/dev/null; then log "User $username already exists." continue fi all_users_exist=false if ! getent group "$username" > /dev/null; then if groupadd "$username"; then log "Group $username created." else log "Failed to create group $username." group_creation_failed=true continue fi fi IFS=',' read -ra group_list <<< "$groups" for group in "${group_list[@]}"; do if ! getent group "$group" > /dev/null; then if groupadd "$group"; then log "Group $group created." else log "Failed to create group $group." group_creation_failed=true fi fi done unset IFS if useradd -m -g "$username" -G "$groups" "$username"; then log "User $username created and added to groups $groups" users_created=true any_users_created=true else log "Failed to create user $username" user_creation_failed=true continue fi password=$(generate_password) log "Generated password for $username" if echo "$username:$password" | chpasswd; then echo "$username,$password" >> "$PASSWORD_FILE" log "Password set for $username and stored securely" else log "Failed to set password for $username" password_setting_failed=true continue fi if chown "$username:$username" "/home/$username" && chmod 700 "/home/$username"; then log "Home directory for $username set up with appropriate permissions." else log "Failed to set up home directory for $username" home_directory_setup_failed=true fi done < "$USER_FILE" ``` - **Summary and Exit** After processing all lines, the script logs a summary of the actions taken and exits. ```bash log "User creation script run completed." if [ "$any_users_created" = true ]; then echo "$(date '+%Y-%m-%d %H:%M:%S') - User creation script completed successfully." elif [ "$all_users_exist" = true ]; then echo "Users already exist. Nothing left to do" else echo "$(date '+%Y-%m-%d %H:%M:%S') - No users were created successfully. Check log file." log "No users were created successfully. Please check the input file format: username;group1,group2,group3." fi [ "$user_creation_failed" = true ] && echo "Users creation incomplete." && log "Some users were not created due to errors. Check file format" [ "$password_setting_failed" = true ] && echo "Users' passwords creation incomplete." && log "Some users' passwords were not set due to errors. Check file format" [ "$group_creation_failed" = true ] && echo "Groups creation incomplete." && log "Some groups were not created due to errors. Check file format" [ "$home_directory_setup_failed" = true ] && echo "Home directories creation incomplete." && log "Some home directories were not set up due to errors." exit 0 ``` **Conclusion** This script offers a reliable way to automate processes related to user management, lowering the possibility of mistakes and saving important administrative time. System administrators can create user accounts in a consistent and safe manner with this script, which also manages passwords and group assignments. 
Detailed logs are kept for accountability and debugging purposes. To get real-time internship experience, sign up with [HNG](https://hng.tech/premium)
laraadeboye
1,909,406
Automating User Creation and Management with Bash: A Step-by-Step Guide
Automating user creation and management can save time, reduce errors, and enhance security for SysOps...
0
2024-07-02T21:45:58
https://dev.to/victorthegreat7/automating-user-creation-and-management-with-bash-a-step-by-step-guide-2oma
Automating user creation and management can save time, reduce errors, and enhance security for SysOps engineers. In this article, we will go over a Bash script that automates the creation of users and groups, sets up home directories, and manages passwords securely. The script reads from a text file where each line consists of a username and their corresponding groups, logs every action in a log file, and saves the randomly generated passwords for each user in a secure `.csv` file accessible only to the owner. Let's dive into the bash script and break it down step by step. ## Line-by-Line Explanation 1.) First off, we have our shebang. ``` #!/bin/bash ``` This specifies the type of interpreter the script will be run with. Since it is a "bash" script, it should be run with the Bourne Again Shell (Bash) interpreter. Also, some commands in the script may not be interpreted correctly outside of Bash. 2.) The paths for the log file and the password file are set to avoid unnecessary repetition in the script. ``` # Define log and password storage files LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" ``` 3.) To ensure that the bash script runs with root privileges, an if statement checks if the Effective User ID (EUID) is equal to zero. The EUID determines the permissions the script will use to run, and 0 represents the root user ID in Linux systems. Only users with administrative privileges (users who can use sudo or the root user itself) can run the script. If someone attempts to run it without such privileges, an error message is printed and the script's run process is terminated. ``` # Check if the script is run with root privileges if [[ $EUID -ne 0 ]]; then echo "This script must be run with root privileges." >&2 exit 1 fi ``` 4.) To ensure that an input file is provided as an argument when running the script, this `if` statement will terminate the script if no argument is provided. In this statement, `$#` represents the number of arguments provided when running the script. If it is equal to zero (no argument provided) or greater than or equal to 2 (more than one argument provided), an error message is printed and the script's execution is halted. ``` # Check if the input file is provided if [[ $# -eq 0 || $# -ge 2 ]]; then echo "Usage: $0 <user_file>" >&2 exit 1 fi ``` 5.) Next is the `log_action` function that records logs using bold lettering (formatted with ANSI escape codes: `\033[1m` and `\033[0m`) and a timestamp (using the `date` command to get the current date and the specified date format: `'%Y-%m-%d %H:%M:%S'`). This function is used to log important steps, success messages, and error messages in the script. ``` # Log function log_action() { echo "--------------------------------------------------" | tee -a "$LOG_FILE" echo -e "$(date +'%Y-%m-%d %H:%M:%S') - \033[1m$1\033[0m" | tee -a "$LOG_FILE" echo "--------------------------------------------------" | tee -a "$LOG_FILE" } ``` 6.) Next is the `create_user_account` function that manages the entire process of creating a user, setting up their home directories with appropriate permissions and ownership, adding them to specified groups, and assigning randomly generated passwords. Every important step is logged. **`create_user_account` function** ``` create_user_account() { local username="$1" local groups="$2" log_action "Creating user account '$username'..." # Check if user already exists if id "$username" &> /dev/null; then echo "User '$username' already exists. Skipping..." 
| tee -a "$LOG_FILE" return 1 fi # Create user with home directory and set shell if useradd -m -s /bin/bash "$username"; then echo "User $username created successfully." | tee -a "$LOG_FILE" else echo "Error creating user $username." | tee -a "$LOG_FILE" return 1 fi # Create user group if it does not exist (in case the script is run in other linux distributions that do not create user groups by default) if ! getent group "$username" >/dev/null; then groupadd "$username" usermod -g "$username" "$username" log_action "Group $username created." fi # Set up home directory permissions echo "Setting permissions for /home/$username..." | tee -a "$LOG_FILE" chmod 700 "/home/$username" && chown "$username:$username" "/home/$username" if [[ $? -eq 0 ]]; then echo "Permissions set for /home/$username." | tee -a "$LOG_FILE" else echo "Error setting permissions for /home/$username." | tee -a "$LOG_FILE" return 1 fi # Add user to additional groups (comma separated) echo "Adding user $username to specified additional groups..." | tee -a "$LOG_FILE" IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) # Check if group exists, if not create it if ! getent group "$group" &>/dev/null; then if groupadd "$group"; then echo "Group $group did not exist. Now created." | tee -a "$LOG_FILE" else echo "Error creating group $group." | tee -a "$LOG_FILE" continue fi fi # Add user to group if gpasswd -a "$username" "$group"; then echo "User $username added to group $group." | tee -a "$LOG_FILE" else echo "Error adding user $username to group $group." | tee -a "$LOG_FILE" fi done # Log if no additional groups are specified if [[ -z "$groups" ]]; then echo "No additional groups specified." | tee -a "$LOG_FILE" fi # Generate random password, set it for the user, and store it in a file echo "Setting password for user $username..." | tee -a "$LOG_FILE" password=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 12) echo "$username:$password" | chpasswd if [[ $? -eq 0 ]]; then echo "Password set for user $username." | tee -a "$LOG_FILE" echo "$username,$password" >> "$PASSWORD_FILE" else echo "Error setting password for user $username. Deleting $username user account" | tee -a "$LOG_FILE" userdel -r "$username" return 1 fi } ``` - In the function, I initially set local variables to hold the values of the specified username and groups, to avoid repetition. ``` local username="$1" local groups="$2" ``` - In this next section, I use the `log_action` function to record the start of each user account creation. Additionally, I verify whether the user already exists. If the user does exist, an error message is displayed and the script's execution is stopped. ``` log_action "Creating user account '$username'..." # Check if user already exists if id "$username" &> /dev/null; then echo "User '$username' already exists. Skipping..." | tee -a "$LOG_FILE" return 1 fi ``` - Next, there is an `if` statement that uses the `useradd` command with the `-m` and `-s` flags to create a user with a login shell (In this case, the login shell is set to be `/bin/bash`. If you want, you can modify or remove the `-s /bin/bash` part entirely) and assigns a home directory to the user. It stops the script's run process if an error occurs during the execution of the command. ``` # Create user with home directory and set shell if useradd -m -s /bin/bash "$username"; then echo "User $username created successfully." | tee -a "$LOG_FILE" else echo "Error creating user $username." 
| tee -a "$LOG_FILE" return 1 fi ``` - Next, in the case that the script is run on other Linux distributions that do not create and assign a primary group with the same name as the newly created user, the `if` statement here will create a group with the same name as the user and add it as the user's primary group. ``` # Create user group if it does not exist (in case the script is run in other linux distributions that do not create user groups by default) if ! getent group "$username" >/dev/null; then groupadd "$username" usermod -g "$username" "$username" log_action "Group $username created." fi ``` - In this section of the function, the newly created user is designated as the owner of the newly created home directory. The owner is also granted all possible permissions for the directory. This is all done using the `chmod` and `chown` commands. If this process is unsuccessful, an error message is printed and the execution is halted. ``` # Set up home directory permissions echo "Setting permissions for /home/$username..." | tee -a "$LOG_FILE" chmod 700 "/home/$username" && chown "$username:$username" "/home/$username" if [[ $? -eq 0 ]]; then echo "Permissions set for /home/$username." | tee -a "$LOG_FILE" else echo "Error setting permissions for /home/$username." | tee -a "$LOG_FILE" return 1 fi ``` - In this section, the newly created user is added to additional groups specified in the input file. By setting the Internal Field Separator (IFS) to expect comma-separated values and using the `read` command with the `-ra` flags, the groups are individually placed inside an array called `group_array` to be used in the subsequent `for` loop. Within the loop, for every value in the group_array, the `xargs` command removes any whitespace, creates the group if it does not exist, and finally adds the user to the group using the `gpasswd` command with the `-a` flag. In the case where no group is specified for the user in the input file, a message will be printed. ``` # Add user to additional groups (comma separated) echo "Adding user $username to specified additional groups..." | tee -a "$LOG_FILE" IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) # Check if group exists, if not create it if ! getent group "$group" &>/dev/null; then if groupadd "$group"; then echo "Group $group did not exist. Now created." | tee -a "$LOG_FILE" else echo "Error creating group $group." | tee -a "$LOG_FILE" continue fi fi # Add user to group if gpasswd -a "$username" "$group"; then echo "User $username added to group $group." | tee -a "$LOG_FILE" else echo "Error adding user $username to group $group." | tee -a "$LOG_FILE" fi done # Log if no additional groups are specified if [[ -z "$groups" ]]; then echo "No additional groups specified." | tee -a "$LOG_FILE" fi ``` - For the final section of the function, a random 12-character password is generated and set for the user. The head command collects a stream of random bytes from the `/dev/urandom` file. This stream is piped to the `tr` command, which filters the bytes to include only alphanumeric characters (A-Z, a-z, 0-9) using the `-dc` flag. The filtered result is then piped to another head command, which selects only the first 12 characters from the edited stream. The password is then set by piping the user's name and the randomly generated password to the `chpasswd` command. The user and the generated password are saved in the designated password `.csv` file. 
If setting the password fails, the script deletes the user account and logs the error, to avoid any possible security risk. ``` # Generate random password, set it for the user, and store it in a file echo "Setting password for user $username..." | tee -a "$LOG_FILE" password=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 12) echo "$username:$password" | chpasswd if [[ $? -eq 0 ]]; then echo "Password set for user $username." | tee -a "$LOG_FILE" echo "$username,$password" >> "$PASSWORD_FILE" else echo "Error setting password for user $username. Deleting $username user account" | tee -a "$LOG_FILE" userdel -r "$username" return 1 fi ``` 7.) After creating the create_user_account function, the script processes a file containing user information and creates user accounts accordingly. ``` # Process the user file user_file="$1" while IFS=';' read -r username groups; do if create_user_account "$username" "${groups%%[ ;]}"; then log_action "User account '$username' created successfully." else log_action "Error creating user account '$username'." fi done < "$user_file" ``` - The script takes the user file as its argument and assigns it to the variable `user_file`. - A while loop reads each line of the user file. The `IFS=';'` part sets the Internal Field Separator to a semicolon (;), splitting each line at the semicolon. The `read -r` username groups part reads the split parts into the username and groups variables. - For each line in the file, the script calls the create_user_account function with the username and the groups (with trailing spaces removed using `${groups%%[ ;]}`). The script also logs a message if the result of the `create_user_account` was a success or failure. 8.) After all that is done, the script gives only the owner (root) and those with root privileges access to the password file, logs the completion of the script's execution and prints the log file and password file location. ``` # Keep password file accessible only to those with root privileges chmod 600 "$PASSWORD_FILE" # Log completion log_action "User creation script completed." # Print log file and password file location echo "Check $LOG_FILE for details." echo "Check $PASSWORD_FILE for user passwords." ``` **Prerequisites for running the script** - A Linux system - A bash terminal (Optional. You can use any available shell terminal on the Linux system). - Root privileges on your system. - The text file containing the usernames and groups, must be formatted as `username;group1,group2`. **To run the script:** 1.) Copy the file from or clone the repository [HNG_Stage1](https://github.com/VictortheGreat7/HNG_Stage1) 2.) Copy the text file containing the usernames and groups to the folder where the script is located. 3.) Then, in the directory where both the script and the text file are now located, run `sudo ./create_users.sh <text file>` ## Conclusion This script simplifies some administrative tasks. With such a script, SysOps engineers can automate user and group creation and management, allowing them to focus on more critical work while ensuring efficient and secure user management. This is one of the project assignments in the [HNG Internship](https://hng.tech/internship) program designed to enhance your resume and deepen your knowledge of bash scripting. For the best experience, visit [HNG Premium](https://hng.tech/premium).
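As a quick reference, here is what a valid input file for this script might look like. The usernames and groups below are hypothetical, chosen only to illustrate the `username;group1,group2` format the script expects:

```
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
```

With a file like this saved as, say, `users.txt`, running `sudo ./create_users.sh users.txt` would create the three users with their home directories and personal groups, create any listed group that does not already exist, add each user to their additional groups, and record the generated passwords in the designated password file.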
victorthegreat7
1,909,404
A new react dev slack group
Hi all! We just launched our slack group focused on react but mainly frontend development. We want...
0
2024-07-02T21:40:32
https://dev.to/jhobbie_board_3a35020a47f/a-new-react-dev-slack-group-44e6
community, react, nextjs, reactnative
Hi all! We just launched our Slack group focused on React, but mainly frontend development. We want to build a community of frontend devs (and all devs) that is welcoming and supportive of one another. So whether you’re just interested in frontend development or you’ve been doing it for years, join us! https://join.slack.com/t/react-devs-world/shared_invite/zt-2lpxa7tie-wkl7sBhANirRaKYciVcLsg
jhobbie_board_3a35020a47f
1,909,403
Understanding Chain-of-Thought Prompting: A Revolution in Artificial Intelligence
What is Chain-of-Thought Prompting? Chain-of-Thought Prompting is a method that guides...
0
2024-07-02T21:32:47
https://dev.to/sgaglione/understanding-chain-of-thought-prompting-a-revolution-in-artificial-intelligence-36i1
llm, python, ia, ai
## What is Chain-of-Thought Prompting?

Chain-of-Thought Prompting is a method that guides language models through a series of logical steps to arrive at an answer or solution. Unlike traditional approaches where models generate responses directly, CoT encourages models to “think out loud,” detailing their reasoning process before formulating a conclusion.

## How It Works

- **Problem Decomposition**: The model is encouraged to break down a complex problem into simpler sub-problems.
- **Reasoning Sequences**: By being prompted to produce explicit reasoning sequences, the model can approach questions in a more structured manner.
- **Iterative Reflection**: The model can revise and refine its answers based on new information or identified errors.

![Wei et al. (2022)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9dlmfn25vtean1gpdjjg.png)

## Example Prompts

**Example 1:** Advanced Mathematical Problem

```python
import openai  # needed by all examples; assumes the pre-1.0 openai package with OPENAI_API_KEY set

# Define the question and steps
question = "If a company grows at an annual rate of 6%, what will its revenue be after 5 years, if its current revenue is 3 million euros?"

steps = """
Question: If a company grows at an annual rate of 5%, what will its revenue be after 4 years, if its current revenue is 4 million euros?
Step-by-step solution:
1. The formula for compound growth is C = C0 × (1 + r)^t.
2. Where C is the future revenue, C0 is the initial revenue, r is the growth rate, and t is the number of years.
3. The initial revenue C0 is 4 million euros.
4. The growth rate r is 5% or 0.05.
5. The number of years t is 4.
6. Calculate: C = 4,000,000 × (1 + 0.05)^4.
7. C = 4,000,000 × 1.21550625.
8. C ≈ 4,862,025.
Answer: The revenue after 4 years will be approximately 4,862,025 euros.

Question:
"""

# Combine question and steps into the prompt
prompt = f"{steps}\n\n{question}\n\nAnswer:"

# Call the OpenAI API
response = openai.Completion.create(
    engine="engine",
    prompt=prompt,
    max_tokens=150
)

# Display the response
print(response.choices[0].text.strip())
```

**Example 2:** Applied Physics Problem

```python
import openai

# Define the question and steps
question = "What is the force exerted by a 12 kg object in free fall after 4 seconds, given an acceleration due to gravity of 9.8 m/s²?"

steps = """
Question: What is the force exerted by a 20 kg object in free fall, given an acceleration due to gravity of 9.8 m/s²?
Step-by-step solution:
1. The force exerted by an object in free fall is given by the formula F = m × a.
2. Where m is the mass of the object and a is the acceleration.
3. The mass m is 20 kg.
4. The acceleration due to gravity a is 9.8 m/s².
5. Calculate the force: F = 20 × 9.8.
6. F = 196 N (Newton).
Answer: The force exerted by the object in free fall is 196 N.

Question:
"""

# Combine question and steps into the prompt
prompt = f"{steps}\n\n{question}\n\nAnswer:"

# Call the OpenAI API
response = openai.Completion.create(
    engine="engine",
    prompt=prompt,
    max_tokens=150
)

# Display the response
print(response.choices[0].text.strip())
```

**Example 3:** Financial Analysis

```python
import openai

# Define the question and steps
question = "Find the total amount in a savings account after 8 years if 10,000 euros are invested at an annual interest rate of 5% compounded annually."

steps = """
Question: What will be the total amount in a savings account after 6 years if 7,000 euros are invested at an annual interest rate of 4% compounded annually?
Step-by-step solution:
1. Use the formula for compound interest: A = P × (1 + r/n)^(nt).
2. Where A is the future amount, P is the initial principal, r is the annual interest rate, n is the number of times the interest is compounded per year, and t is the number of years.
3. The initial principal P is 7,000 euros.
4. The annual interest rate r is 4% or 0.04.
5. The interest is compounded once per year, so n = 1.
6. The number of years t is 6.
7. Calculate: A = 7,000 × (1 + 0.04/1)^(1×6).
8. A = 7,000 × 1.265319.
9. A ≈ 8,857.23.
Answer: The total amount in the account after 6 years will be approximately 8,857.23 euros.

Question:
"""

# Combine question and steps into the prompt
prompt = f"{steps}\n\n{question}\n\nAnswer:"

# Call the OpenAI API
response = openai.Completion.create(
    engine="engine",
    prompt=prompt,
    max_tokens=150
)

# Display the response
print(response.choices[0].text.strip())
```

**Example 4:** Currency Conversion Problem

```python
import openai

# Define the question and steps
question = "How many euros are needed to obtain 75 US dollars if 1 euro is worth 1.15 US dollars?"

steps = """
Question: How many euros are needed to obtain 50 US dollars if 1 euro is worth 1.2 US dollars?
Step-by-step solution:
1. To find out how many euros are needed, we divide the amount in dollars by the exchange rate.
2. Euros needed = Dollars / Exchange rate.
3. The amount in dollars is 50.
4. The exchange rate is 1 euro for 1.2 dollars.
5. Calculate: Euros needed = 50 / 1.2.
6. Euros needed ≈ 41.67.
Answer: 41.67 euros are needed to obtain 50 US dollars.

Question:
"""

# Combine question and steps into the prompt
prompt = f"{steps}\n\n{question}\n\nAnswer:"

# Call the OpenAI API
response = openai.Completion.create(
    engine="model_engine",
    prompt=prompt,
    max_tokens=150
)

# Display the response
print(response.choices[0].text.strip())
```

A zero-shot variant of these prompts, which drops the worked exemplar entirely, is sketched at the end of this post.

## Benefits of Chain-of-Thought Prompting

**1. Improved Accuracy**

By breaking down problems into logical steps, CoT enhances the accuracy of responses. This is particularly useful for complex tasks like mathematics and logical analyses where each step must be exact to achieve the correct result.

**2. Explainability**

Language models can often seem like “black boxes.” Chain-of-Thought Prompting provides greater transparency by making the model’s thought process visible, making its decisions more explainable and verifiable.

**3. Robustness**

By encouraging thorough reflection, CoT helps identify and correct errors along the way, increasing the model’s robustness.

## Practical Applications

**1. Education**

In the educational field, Chain-of-Thought Prompting can be used to create interactive learning tools that not only provide answers but also explain the solving processes. This can help students better understand complex concepts and develop problem-solving skills.

**2. Technical Support**

Virtual assistants and chatbots can benefit from CoT by offering more precise and detailed technical solutions. For example, instead of simply providing a solution, the bot can explain each step of the troubleshooting process.

**3. Research and Development**

In research and development sectors, Chain-of-Thought Prompting can help generate hypotheses and plan experiments more systematically. By detailing the reasoning steps, researchers can better assess the validity of their approaches and adjust their methodologies accordingly.

## Future Implications

**Optimization and Personalization**

As models become more sophisticated, it will be crucial to develop methods to customize CoT based on specific user needs and contexts. This might involve adjustments in how models decompose problems and manage reasoning sequences.

**Ethics and Responsibility**

With increased transparency comes increased responsibility. Models using Chain-of-Thought Prompting must be designed to ensure they do not generate bias or misinformation. Additionally, it will be important to monitor and regulate the use of these models to prevent misuse.

## Conclusion

Chain-of-Thought Prompting is a promising innovation that has the potential to transform how we interact with language models. By encouraging structured and sequential thinking, this technique not only improves the accuracy and robustness of responses but also provides better transparency and explainability. As this method evolves, it will open up new perspectives in various fields, from education to research, while raising new questions about the optimization and ethics of AI.

## Key Research

Here are some key research papers on Chain-of-Thought Prompting if you would like to explore the topic in greater detail:

> **“Chain-of-Thought Prompting Elicits Reasoning in Large Language Models” — Wei, Jason et al. (2022)**
> This paper introduces the Chain-of-Thought Prompting method, which enhances the reasoning capabilities of language models by asking them to produce a sequence of reasoning steps before giving a final answer.
> [arXiv:2201.11903](https://arxiv.org/abs/2201.11903)

> **“Large Language Models are Zero-Shot Reasoners” — Kojima, Takeshi et al. (2022)**
> The authors demonstrate how large language models can perform complex reasoning without explicit training by using well-crafted prompts.
> [arXiv:2205.11916](https://arxiv.org/abs/2205.11916)

> **“Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents” — Huang, Wenlong et al. (2022)**
> This paper explores how language models can be used to autonomously plan actions by breaking down complex tasks into manageable sub-tasks.
> [arXiv:2201.07207](https://arxiv.org/abs/2201.07207)

> **“Measuring Massive Multitask Language Understanding” — Hendrycks, Dan et al. (2021)**
> The authors evaluate the performance of large language models on a variety of multitask challenges and emphasize the importance of task decomposition to improve understanding and accuracy.
> [arXiv:2009.03300](https://arxiv.org/abs/2009.03300)

> **“Emergent Abilities of Large Language Models” — Wei, Jason et al. (2022)**
> This paper discusses the emergent abilities of large language models and suggests that techniques like Chain-of-Thought Prompting are essential to leverage these abilities.
> [arXiv:2206.07682](https://arxiv.org/abs/2206.07682)
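To complement the few-shot examples above, here is the zero-shot variant described in Kojima et al. (2022): instead of providing a worked exemplar, the prompt simply appends a trigger phrase such as “Let's think step by step.” This is a minimal sketch reusing the same legacy `openai.Completion.create` call as the examples in this post; the engine name is a placeholder as before, and the question is an invented example.

```python
import openai  # assumes the pre-1.0 openai package with OPENAI_API_KEY set in the environment

# Zero-shot Chain-of-Thought: no worked exemplar, only a reasoning trigger phrase
question = "A shop sells pens at 2 euros each. How much do 13 pens cost after a 10% discount?"

# The trailing trigger phrase elicits step-by-step reasoning before the final answer
prompt = f"Question: {question}\nAnswer: Let's think step by step."

response = openai.Completion.create(
    engine="engine",  # placeholder, as in the examples above
    prompt=prompt,
    max_tokens=150
)

print(response.choices[0].text.strip())
```

In practice, this trigger works best on reasoning-heavy questions; for simple lookups it mostly adds tokens without improving accuracy.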
sgaglione
1,909,338
Automating User On-Boarding With Bash Scripting.
Introduction In Modern day IT administration, automation must become part of your...
0
2024-07-02T21:32:29
https://dev.to/eben/automating-user-on-boarding-with-bash-scripting-3p00
## Introduction

In modern-day IT administration, automation must become part of your day-to-day activity. Automation is crucial for maintaining uniformity and enforcing the organization's security policies, as well as for avoiding the wasted time of doing the same things over and over. One of those repetitive tasks that automation takes care of is user creation, or simply the onboarding of a new user in an organization. This blog post briefly examines a simple way to automate the user onboarding process on Linux operating systems using bash scripting.

This article references HNG task 1 for the DevOps track. HNG is a non-profit organization that provides internship and learning programs for individuals seeking to gain real-life experience in IT. You can find out more about HNG using the links below.

[HNG INTERNSHIP](https://hng.tech/internship)
[HNG LEARN](https://hng.tech/learn)

## Automating User On-Boarding Using Bash

The script we are going to use will enable us to achieve the following:

- Create users
- Generate random passwords for the users
- Store the passwords in a secure file
- Create groups for the users
- Assign the users to their personal and additional groups.
- Log all actions to a log file.

This script can be reused over and over again, as it is not limited to a one-time creation of users, and it can be modified to perform additional tasks or to drop what is not needed.

Let's break down the provided script step by step to understand what it does piece by piece.

- First off, we set the interpreter of the script, which states what shell we will be using to run the script.

`#!/bin/bash`

- Define the log file and password file locations. The rest of the script uses a small `log_message` helper for timestamped logging; a minimal version of it is included here.

```
# Define the log file location for logging
LOG_FILE="/var/log/user_management.log"

# Define the location of the password file where the generated passwords for each user will be stored
PASSWORD_FILE="/var/secure/user_passwords.txt"

# Minimal helper to log a timestamped message to the console and the log file
log_message() {
    echo "$(date) - $1" | tee -a "$LOG_FILE"
}
```

- Define the location of the users.txt file where we have entered the names of the users with their respective groups.

```
# Define the location of the user file for the creation of users and groups
USER_FILE="users.txt"
```

- Create the /var/secure directory with appropriate permissions if it doesn't exist. (This must happen before the password file is touched, since the password file lives inside /var/secure.)

```
# Create /var/secure directory if it doesn't exist
if [ ! -d "/var/secure" ]; then
    mkdir -p /var/secure
    chmod 700 /var/secure
    echo "$(date) - Created /var/secure directory." | tee -a "$LOG_FILE"
fi
```

- Ensure the log and password files exist and set proper permissions.

```
# Ensure log file and password file exist and set proper permissions
touch "$LOG_FILE" "$PASSWORD_FILE"
chmod 600 "$LOG_FILE" "$PASSWORD_FILE"
```

- Define a function to generate random passwords for the users.

```
# Function to generate a random 12-character alphanumeric password
generate_password() {
    tr -dc A-Za-z0-9 </dev/urandom | head -c 12
}
```

- Confirm that the users.txt file to be used for user creation exists.

```
# Ensure the users.txt file exists
if [ ! -f "$USER_FILE" ]; then
    log_message "User file $USER_FILE does not exist. Exiting."
    exit 1
fi
```

### Creation of users, personal groups and password generation

- Read users from the users.txt file (a sample input file is shown at the end of this post).
- Check if the user exists; if not, create the user and their personal group.
- Generate and set a password for the user.
- Save the password securely.

```
# Step 1: Create Users, groups and generate passwords for the users
log_message "Starting user creation process..."

while IFS=';' read -r username groups; do
    username=$(echo "$username" | xargs)
    groups=$(echo "$groups" | xargs)

    if [ -z "$username" ]; then
        log_message "Empty username found. Skipping."
        continue
    fi

    if id "$username" &>/dev/null; then
        log_message "User $username already exists."
        continue
    fi

    if groupadd "$username"; then
        if useradd -m -g "$username" "$username"; then
            log_message "User $username and group $username created."
            password=$(generate_password)
            if echo "$username:$password" | chpasswd; then
                log_message "Password set for user $username."
                echo "$username:$password" >> "$PASSWORD_FILE"
                log_message "Password for user $username saved to $PASSWORD_FILE."
            else
                log_message "Failed to set password for user $username."
            fi
        else
            log_message "Failed to create user $username."
        fi
    else
        log_message "Failed to create group $username."
    fi
done < "$USER_FILE"
```

### Group Assignment Process

- Read users and their groups from the users.txt file.
- Check if the user exists; if not, log a message and skip to the next user.
- Add the user to the specified groups, creating groups if they don't exist.

```
# Step 2: Add Users to Groups
log_message "Starting group assignment process..."

while IFS=';' read -r username groups; do
    username=$(echo "$username" | xargs)
    groups=$(echo "$groups" | xargs)

    if [ -z "$username" ]; then
        log_message "Empty username found. Skipping."
        continue
    fi

    if ! id "$username" &>/dev/null; then
        log_message "User $username does not exist. Skipping group assignment."
        continue
    fi

    IFS=',' read -ra GROUP_ARRAY <<< "$groups"
    for group in "${GROUP_ARRAY[@]}"; do
        group=$(echo "$group" | xargs)

        if [ -z "$group" ]; then
            log_message "Empty group name for user $username. Skipping."
            continue
        fi

        if ! getent group "$group" > /dev/null 2>&1; then
            if groupadd "$group"; then
                log_message "Group $group created."
            else
                log_message "Failed to create group $group."
                continue
            fi
        fi

        if usermod -aG "$group" "$username"; then
            log_message "User $username added to group $group."
        else
            log_message "Failed to add user $username to group $group."
        fi
    done
done < "$USER_FILE"

log_message "User and group creation process completed."
```

### Verify the User accounts that have been created

- You can verify that an account was created successfully by using the `id` command, which shows the user and their groups:

```
id user_name
```

- You can also retrieve the secure passwords by viewing the contents of the password file:

```
sudo cat /var/secure/user_passwords.txt
```

- You can view the logs for the operation by using this command:

```
sudo cat /var/log/user_management.log
```

## Conclusion

Creating a bash script to manage user accounts can make adding new employees or users much easier. By following the steps in this guide, you can build a reliable script that:

- Creates user accounts
- Adds users to groups
- Sets secure passwords
- Logs actions for transparency and audits.

[Link to the script](https://github.com/Eben-DevOps/User-Creation-Script.git)

Thanks to HNG for making this possible. You can also join their premium channel to enjoy more benefits, learn and network in a great environment, by using this link: [HNG Premium](https://hng.tech/premium)
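The post does not show the input file itself, but the parsing logic above (`IFS=';'` to split the username from the groups, then `IFS=','` for the group list) implies a `users.txt` of the following shape. The names here are hypothetical, purely to illustrate the format:

```
alice;sudo,dev
bob;dev,www-data
charlie;
```

A line with a trailing semicolon and no groups is fine: the user still gets a personal group in step 1, and step 2 simply has no additional groups to assign.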
eben
1,909,372
Install Module in Powershell without Install-Module
The term ‘Install-Module’ is not recognized as the name of a cmdlet. Have you ever found...
0
2024-07-02T21:26:27
https://medium.com/@kinneko-de/77dc0380efff
powershell, help, windows, installmodul
## The term ‘Install-Module’ is not recognized as the name of a cmdlet.

Have you ever found yourself in trouble because your PowerShell is not working anymore, after a colleague told you to just delete every module of Windows PowerShell? No problem… you can just use Install-Module to reinstall everything. Oops… Install-Module is part of the module you just deleted.

Here is the guide on how to escape this chicken-and-egg problem.

![Chicken - Egg - Problem](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/meffvuoks88cjikxrdmb.jpg)

***

## How to restore Install-Module manually

To restore functionality, you need two modules: _PowerShellGet_ and its dependency _PackageManagement_. You can get them from the [PowerShell Gallery](https://www.powershellgallery.com/). Search for the modules and click on _‘Manual Download’_. There you will also find a helpful [description of how to install these NuGet packages manually](https://learn.microsoft.com/en-us/powershell/gallery/how-to/working-with-packages/manual-download?view=powershellget-3.x). Follow the instructions step by step. Only in step 5, you should first create a subfolder named after the version of the module.

Restart your PowerShell, and you can now install all other modules.

## Are you a developer and too lazy to do things manually?

Since you already use PowerShell, why not just use PowerShell?

{% embed https://gist.github.com/KinNeko-De/ca317f091b0dca9c4feb5ebbd0bc1f36.js %}

Finally, copy the two folders _PowerShellGet_ and _PackageManagement_ to your PowerShell module folder. You will get a warning that the files are marked as untrusted. If anyone finds a solution, please leave a comment.

## If you are simply smart and pragmatic

Use the Windows Recycle Bin to restore the PowerShellGet and PackageManagement modules.

***

Please also leave a comment if anyone else is experiencing the same problem. Together we are strong 💪♥️❤♥️

![Love for all](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nepjx9ua6o1vm1jqn1mp.jpg)

_Photo by <a href="https://unsplash.com/de/@timmarshall?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Tim Marshall</a> on <a href="https://unsplash.com/de/fotos/hande-die-zusammen-mit-roter-herzfarbe-geformt-wurden-cAtzHUz7Z8g?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>_
kinneko-de