Dataset columns:
- id: int64 (5 … 1.93M)
- title: string (length 0–128)
- description: string (length 0–25.5k)
- collection_id: int64 (0 … 28.1k)
- published_timestamp: timestamp[s]
- canonical_url: string (length 14–581)
- tag_list: string (length 0–120)
- body_markdown: string (length 0–716k)
- user_username: string (length 2–30)
1,908,379
Top 15 Tools for Frontend Developers: Optimize Your Workflow
In the dynamic world of frontend development, staying updated with the latest tools and techniques is...
0
2024-07-02T04:43:26
https://dev.to/vyan/top-15-tools-for-frontend-developers-optimize-your-workflow-374o
webdev, javascript, react, frontend
In the dynamic world of frontend development, staying updated with the latest tools and techniques is essential for creating efficient, high-quality web applications. Here, we've compiled a list of frontend tools that can significantly enhance your productivity and streamline your workflow. From measuring screen elements to cleaning up unused CSS rules, these tools are must-haves for any frontend developer.

## 1. PerfectPixel by PixelSnap

[PerfectPixel](https://getpixelsnap.com/) is the fastest tool for measuring anything on your screen. It allows you to create pixel-perfect designs by comparing your web page with a design mockup. PerfectPixel is an extension that overlays a semi-transparent image over your HTML, enabling you to match the design precisely. It's ideal for ensuring that your web pages look exactly as intended.

![pixel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qst648mpf7k0vh3hnfhm.png)

## 2. Code Snippets Libraries

Finding up-to-date snippets for JavaScript and React use cases can save you a lot of time. Websites like [CodeToGo](https://codetogo.io/) provide a vast collection of code snippets that you can quickly incorporate into your projects. These libraries are great for solving common problems without reinventing the wheel.

![code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l18cr4u0efmr50rk8m43.png)

## 3. UnusedCSS

Easily clean up your unused CSS rules with [UnusedCSS](https://unused-css.com/). This tool scans your stylesheets and identifies CSS rules that are not being used in your project. By removing these unused rules, you can reduce the size of your CSS files, improving load times and overall performance.

![unused](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3xk27vqorw8frtg6pik.png)

## 4. Responsive Design Tools

Master the art of creating pixel-perfect responsive websites with tools like [Responsively](https://responsively.app/). Responsively allows you to preview your web pages across multiple devices simultaneously, ensuring that your designs look great on all screen sizes. This tool is essential for developing websites that offer a consistent user experience across desktops, tablets, and smartphones.

![res](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/edsopezf0jg2sjfgzwu9.png)

## 5. Linear Gradients Collection

A free collection of 180 linear gradients with CSS3 code is available at [WebGradients](https://webgradients.com/). These gradients can add depth and visual interest to your web designs. The website provides ready-to-use gradient codes, making it easy to incorporate beautiful color transitions into your projects.

![web](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8uaqjrk1blsxi3917n2.png)

## 6. Spline 3D Design Software

[Spline](https://spline.design/) is a free 3D design tool that allows you to create interactive web experiences. With Spline, you can design, animate, and integrate 3D objects into your web pages. This tool is perfect for adding a new dimension to your projects and engaging users with interactive elements.

![spline](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/stv9tdi61191q103wcto.png)

## 7. SVG Shape Dividers

Export beautiful SVG shape dividers for your projects using tools like [Shape Divider](https://www.shapedivider.app/). These shape dividers can be used to create visually appealing section transitions on your web pages. The tool provides customizable SVG shapes that you can easily integrate into your designs.

![shape](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99gqmb5lnl601h12aucr.png)

## 8. Custom Wave Animations

Add custom wave animation effects to your websites with [Get Waves](https://getwaves.io/). This tool allows you to create unique wave patterns that can be used as background animations or section dividers. The waves are generated in SVG format, ensuring that they are scalable and lightweight.

![custom](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5lrmctnu80ut6urovib.png)

## 9. SVG Shape Generators

Create random, unique, and organic-looking SVG shapes with [Blobmaker](https://www.blobmaker.app/). This tool generates blob-like shapes that can be used as decorative elements in your web designs. The shapes are fully customizable and can add a playful touch to your projects.

![blob](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4xc1kytp7m3ge845v67.png)

## 10. CSS Animation Libraries

Play with a collection of ready-made CSS animations using libraries like [Animista](https://animista.net/). These libraries provide a variety of animation effects that you can apply to your elements with minimal effort. They are great for adding subtle animations to enhance user interactions.

![ani](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s68ptyucnjwez984z1qn.png)

## 11. Lorem Ipsum Generators

Generating dummy text is a common task in web development. Websites like [Lorem Ipsum](https://www.lipsum.com/) provide placeholder text commonly used for previewing layouts and visual mockups. These generators can save you time by quickly providing filler content for your designs.

![Lorem](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bslmxi9zfm5dugz1xn07.png)

## 12. Google Fonts

Familiar to all of us, [Google Fonts](https://fonts.google.com/) offers a wide variety of free fonts that you can use in your projects. The extensive collection includes fonts for various styles and purposes, ensuring that you can find the perfect typeface for your design.

![Google](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vs7qq6q0hycpl5elo6c9.png)

## 13. Landing Page Checklist

Building the best landing page requires careful planning and the right resources. The [Landing Page Checklist](https://landingpage.fyi/) provides hand-picked examples, tools, and guides to help you create high-converting landing pages. This resource covers everything from design elements to optimization tips.

![Landing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/757xog0tq08vyuw7mxvl.png)

## 14. Museum of Websites

Explore the evolution of popular websites like Google, Amazon, and more at the [Museum of Websites](https://www.kapwing.com/museum-of-websites). This resource showcases the design changes and development trends of major websites over time, offering valuable insights for frontend developers.

![Museum](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uee4xa0ccduspo5vcl10.png)

## Conclusion

Incorporating these tools into your workflow can significantly enhance your productivity and the quality of your web projects. From design precision with PerfectPixel to efficient CSS management with UnusedCSS, these tools provide valuable support for frontend developers. Explore these resources and see how they can streamline your development process and help you create outstanding web applications. Happy coding!
vyan
1,908,401
Taming the State Beast: Redux vs. Context API in React Applications
As React applications grow in complexity, managing application state becomes a critical challenge....
0
2024-07-02T04:35:04
https://dev.to/epakconsultant/taming-the-state-beast-redux-vs-context-api-in-react-applications-4obj
As React applications grow in complexity, managing application state becomes a critical challenge. This article explores two popular approaches, Redux and the Context API, empowering you to choose the right state management solution for your React projects.

**Understanding Application State**

Application state refers to the data that reflects the current condition of your React application. It can include user data, UI preferences, or any information that needs to be shared across different components.

**The Challenge of State Management**

Without a proper state management strategy, sharing and updating state across various components in a React application can become cumbersome. This can lead to:

- Prop Drilling: Passing data down multiple levels of components as props can become messy and difficult to maintain.
- Inconsistent State: Managing state within individual components can lead to inconsistencies when the same data needs to be accessed by multiple parts of the application.
- Scalability Issues: As applications grow, the complexity of state management increases significantly.

**Redux: A Predictable State Container**

Redux is a popular state management library that enforces a centralized store for application state. Here's what Redux offers:

- Single Source of Truth: All application state is stored in a single, centralized store, ensuring consistency and avoiding state duplication.
- Pure Functions and Immutability: Redux relies on pure functions (reducers) that transform state predictably without modifying the original state. This promotes predictability and easier debugging.
- Middleware: Redux allows you to use middleware to intercept actions and perform additional logic before updating the state.

**Redux Example**

Imagine an application managing a shopping cart. In Redux, you would define actions (e.g., "ADD_TO_CART") to describe state changes. Reducers handle these actions and update the state in the store based on predefined logic.
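The action/reducer/store flow just described can be sketched in a few lines of plain JavaScript. This is a simplified illustration of the pattern, not the actual `redux` package: the `ADD_TO_CART` action name comes from the example above, while `REMOVE_FROM_CART`, the state shape, and the tiny `createStore` helper are assumptions made for the sketch.

```javascript
// Minimal sketch of the Redux pattern (not the redux npm package itself).
// A reducer is a pure function: (state, action) -> new state.
function cartReducer(state = { items: [] }, action) {
  switch (action.type) {
    case 'ADD_TO_CART':
      // Return a new object instead of mutating the old one (immutability).
      return { items: [...state.items, action.payload] };
    case 'REMOVE_FROM_CART':
      return { items: state.items.filter((item) => item !== action.payload) };
    default:
      return state;
  }
}

// A tiny store: the single source of truth for application state.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the reducer computes the next state
      listeners.forEach((listener) => listener());
    },
    subscribe: (listener) => listeners.push(listener),
  };
}

const store = createStore(cartReducer);
store.dispatch({ type: 'ADD_TO_CART', payload: 'shoes' });
console.log(store.getState().items); // ['shoes']
```

In the real library, components would read this state via bindings such as `react-redux` rather than calling `getState()` directly.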
React components connect to the Redux store to access and display the current state (e.g., shopping cart items).

**Context API: Built-in React State Sharing**

The Context API is a built-in React feature that allows you to share state across components without explicit prop drilling. Here's how it works:

- Creating Context: Use the `React.createContext` function to define a context object that holds the state you want to share.
- Providing Context: Wrap the top-level component of your application tree with a Context Provider, passing the state value you want to make accessible.
- Consuming Context: Descendant components can access the shared state using the `useContext` hook.

**Context API Example**

In the shopping cart example, you could create a Context object holding the cart items data. Wrap your application with a Context Provider that defines the initial state of the cart. Components within the application tree can then use the `useContext` hook to access and display the current cart items.

[Learn YAML for Pipeline Development : The Basics of YAML For PipeLine Development](https://www.amazon.com/dp/B0CLJVPB23)

**Choosing the Right Tool**

- Redux: Ideal for complex applications with large amounts of state, predictable state updates, and the need for middleware functionality.
- Context API: Suitable for smaller applications with simpler state needs, where avoiding prop drilling is the primary concern.

**Considerations**

- Learning Curve: Redux has a steeper learning curve due to its concepts and structure compared to the familiar React Context API.
- Boilerplate Code: Redux typically involves more boilerplate code for setting up actions, reducers, and store connections.
- Complexity vs. Flexibility: Redux offers more flexibility and control over state management but adds complexity. The Context API provides a simpler approach but may not scale effectively for very complex applications.
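React's real Context API only works inside a rendered component tree, but the provide/consume idea behind it can be sketched in framework-free JavaScript. Everything below is a hypothetical stand-in for illustration only: `provide` and `read` are invented names, and this is not React's implementation of `createContext`/`useContext`.

```javascript
// Illustrative stand-in for the Context idea: a provider supplies a value,
// and any code running "inside" it reads the nearest provided value,
// with no value passed down explicitly (no prop drilling).
function createContext(defaultValue) {
  const stack = [defaultValue];
  return {
    // Run `fn` with `value` provided, like wrapping a subtree in <Provider>.
    provide(value, fn) {
      stack.push(value);
      try {
        return fn();
      } finally {
        stack.pop(); // restore on exit, like leaving the provider's subtree
      }
    },
    // Read the nearest provided value, like useContext(SomeContext).
    read: () => stack[stack.length - 1],
  };
}

const CartContext = createContext({ items: [] });

// Outside any provider we get the default value.
console.log(CartContext.read().items.length); // 0

// "Components" running inside the provider see the provided cart.
const count = CartContext.provide({ items: ['shoes', 'hat'] }, () =>
  CartContext.read().items.length
);
console.log(count); // 2
```

In actual React code, the equivalent would be `<CartContext.Provider value={...}>` around the tree and `useContext(CartContext)` inside descendant components.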
**Conclusion**

Both Redux and the Context API offer valuable solutions for state management in React applications. By understanding their strengths and weaknesses, you can choose the approach that best aligns with the complexity and needs of your project. Remember, the goal is to maintain a clean and maintainable state management strategy as your React application evolves.
epakconsultant
1,908,400
New HTML <dialog> tag: An absolute game changer
Frontend Development Transformed by the New <dialog> Tag ❌...
0
2024-07-02T04:35:03
https://dev.to/manojgohel/new-html-tag-an-absolute-game-changer-3j8j
html, dialog, javascript, webdev
### Frontend Development Transformed by the New `<dialog>` Tag

#### ❌ Before:

Creating a dialog used to be a labor-intensive task. Here's how much work it took:

```html
<!-- HTML for the dialog -->
<div class="dialog-overlay">
  <div class="dialog">
    <p>Dialog content...</p>
    <button class="close-button">Close</button>
  </div>
</div>
```

```css
/* CSS for the dialog */
.dialog-overlay {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: rgba(0, 0, 0, 0.5);
  display: flex;
  justify-content: center;
  align-items: center;
}

.dialog {
  background: white;
  padding: 20px;
  border-radius: 5px;
}

.close-button {
  background: #f44336;
  color: white;
  border: none;
  padding: 10px;
  cursor: pointer;
}
```

And that's just the CSS for basic dialog functionality. It still looks very plain and requires additional JavaScript to handle opening and closing the dialog.

#### ✅ Now:

With the new `<dialog>` tag, we can achieve the same functionality with much less effort.

```html
<!-- HTML using <dialog> -->
<dialog>
  <p>Dialog content...</p>
  <button class="close-button">Close</button>
</dialog>
```

```javascript
// JavaScript to show and close the dialog
const dialog = document.querySelector('dialog');
dialog.showModal(); // To show the dialog
dialog.querySelector('.close-button').addEventListener('click', () => dialog.close());
```

We can even use the `show()` method to display a non-modal dialog, which is less intrusive as it has no backdrop.

```javascript
// JavaScript to show a non-modal dialog
const dialog = document.querySelector('dialog');
dialog.show(); // To show a non-modal dialog
```

#### Dialogs: A Vital UI Component

Dialogs have always been a powerful tool to capture user attention and deliver important information. They are a key feature in every UI design system, from Material Design to Fluent Design. However, using dialogs often required third-party libraries or custom components. Many of these libraries didn't follow best practices for usability and accessibility, such as dismissing the dialog with the Escape key. The new `<dialog>` tag simplifies all of this.

#### Auto-Open Dialog

The `open` attribute keeps the dialog open from the moment you load the page:

```html
<dialog open>
  <p>Auto-open dialog content...</p>
</dialog>
```

#### Auto-Close Button

You can add close functionality with standard event listeners and the `close()` method, but the built-in `<dialog>` makes this even easier, with no separate script needed:

```html
<dialog>
  <p>Dialog content...</p>
  <button class="close-button" onclick="this.closest('dialog').close()">Close</button>
</dialog>
```

#### Styling the `<dialog>` Tag

The `<dialog>` tag has a special `::backdrop` pseudo-element for styling the backdrop:

```css
dialog::backdrop {
  background: rgba(0, 0, 0, 0.5);
}
```

Styling the main element is straightforward:

```css
dialog {
  background: white;
  padding: 20px;
  border-radius: 5px;
  border: none;
}
```

#### Final Thoughts

With the new HTML `<dialog>` tag, creating modals and dialogs in our web apps has never been easier or faster. This tag significantly reduces the complexity of implementing dialogs, enhances accessibility, and allows developers to follow best practices effortlessly.

Please share my blog and react as you wish... Meet me on [https://topmate.io/manojgohel](https://topmate.io/manojgohel)
manojgohel
1,908,399
The DevTool Content Machine: How to Use AI to Crank Out Awesome Content (Even With a Tiny Team)
Learn how to harness the power of AI to create high-quality technical content that resonates with...
0
2024-07-02T04:34:32
https://dev.to/swati1267/the-devtool-content-machine-how-to-use-ai-to-crank-out-awesome-content-even-with-a-tiny-team-4d2k
devrel, contentwriting, marketing, community
_Learn how to harness the power of AI to create high-quality technical content that resonates with developers, even with limited resources. Discover how Doc-E.ai can transform your content creation process and boost your DevTool's reach._

Ever feel like you need a cloning machine for your content team? As a scrappy DevTool startup, you've probably got a mountain of ideas for blog posts, [tutorials](https://www.doc-e.ai/post/how-to-create-technical-tutorials-that-developers-love-even-if-youre-not-a-coder), and docs... but who has the time to write them all? That's where the magic of AI comes in. This guide will show you how to harness the power of AI to [create high-quality content](https://www.doc-e.ai/post/content-recycling-for-the-win-turning-user-questions-into-killer-blog-posts-without-breaking-a-sweat) at warp speed, even if your team is small (or just you!). Get ready to unlock a content creation superpower.

**The Content Conundrum: A Startup Struggle**

Developer communities are like hungry caterpillars: they need a constant supply of fresh, relevant content to thrive. But creating that content can feel like an endless treadmill, especially when you're juggling a million other things. You know content is essential for attracting users, educating them, and building a loyal following, but who has the time and resources to do it all?

**Enter AI: Your Content Creation Sidekick**

No, we're not talking about Skynet taking over the world (yet). But AI has come a long way, and it's ready to become your content creation bestie. AI-powered tools can:

- **Generate Ideas**: Stumped for what to write about? AI can analyze your existing content and community discussions to suggest fresh topics that resonate with your audience.
- **Draft Content**: AI can whip up blog post drafts, social media posts, or even sections of documentation in minutes.
- **Optimize for SEO**: No need to be an SEO expert. AI tools can suggest relevant keywords, optimize headlines, and even check for readability.

**Doc-E.ai: Your AI-Powered Content Machine**

Doc-E.ai takes AI-powered content creation to the next level. It's specifically designed for DevTool teams, understanding the nuances of technical language and developer needs. It's like having a tech-savvy content writer on your team 24/7, but without the coffee runs or payroll. Here's how Doc-E.ai can supercharge your content:

- **Turn Conversations into Content**: Ever had a great discussion in Slack or Discord? Doc-E.ai can transform those threads into polished blog posts, FAQs, or tutorials with just a few clicks.
- **Create Consistent Content**: Doc-E.ai can maintain a consistent tone and style across all your content, ensuring a cohesive brand voice.
- **Offer Data-Driven Insights**: Doc-E.ai analyzes your content and community interactions to give you feedback on what's working and what's not.

**Best Practices for AI-Powered Content**

- **Human Touch is Key**: AI is a great tool, but it's not a replacement for human creativity and expertise. Always review and edit AI-generated content to ensure accuracy, clarity, and that special human touch.
- **Start Small**: Don't try to automate everything at once. Begin with a few specific use cases, like generating FAQs from common questions or summarizing long technical discussions.
- **Experiment and Iterate**: Not all AI-generated content is created equal. Experiment with different tools and prompts to find what works best for you.

**The Bottom Line**

By leveraging AI, you can free up your team to focus on high-impact activities like building relationships with developers and creating truly unique content that showcases your expertise.

Ready to unlock the power of AI-powered content creation? [Try Doc-E.ai for free today!](https://www.doc-e.ai/post/getting-started)
swati1267
1,908,398
⚡ MyFirstApp - React Native with Expo (P18) - Code Layout Favourites
⚡ MyFirstApp - React Native with Expo (P18) - Code Layout Favourites
27,894
2024-07-02T04:33:36
https://dev.to/skipperhoa/myfirstapp-react-native-with-expo-p18-code-layout-favourites-53ne
react, reactnative, webdev, tutorial
⚡ MyFirstApp - React Native with Expo (P18) - Code Layout Favourites {% youtube Y5ttPFMcO2Q %}
skipperhoa
1,908,397
⚡ MyFirstApp - React Native with Expo (P17) - Code Layout Location
⚡ MyFirstApp - React Native with Expo (P17) - Code Layout Location
27,894
2024-07-02T04:32:06
https://dev.to/skipperhoa/myfirstapp-react-native-with-expo-p17-code-layout-location-2a05
react, reactnative, webdev, tutorial
⚡ MyFirstApp - React Native with Expo (P17) - Code Layout Location {% youtube aMM95V3yIO8 %}
skipperhoa
1,908,396
Dive into the Fascinating World of Ansible Automation! 🤖
Learn the basics of Ansible and gain a technical overview of automation with Red Hat, Inc. Accelerate application time to value and improve operational efficiency.
27,895
2024-07-02T04:30:58
https://getvm.io/tutorials/ansible-basics-an-automation-technical-overview
getvm, programming, freetutorial, videocourses
Hey there, fellow IT enthusiasts! 👋 Are you tired of manual, time-consuming tasks and looking to streamline your organization's dynamic infrastructure? Well, I've got the perfect solution for you: the "Ansible Basics: An Automation Technical Overview" course on Udemy!

## Unleash the Power of Ansible

This course is a comprehensive introduction to the world of Ansible, a powerful automation tool that can help you accelerate your application's time to value and improve operational efficiency. 💻

You'll learn the basics of Ansible, including how to create automation using Playbooks, variables, and the debug module. Plus, you'll dive into the Red Hat Ansible Automation Platform 2 and discover how to operationalize your automation using the automation controller.

## Hands-on Automation Experience

One of the best things about this course is the hands-on approach. You'll get to build an automation job from scratch, using inventories, credentials, and job templates. And if that's not enough, you'll even learn how to use surveys to create self-service IT solutions. 🚀

## Who Should Take This Course?

This course is designed for IT leaders, administrators, engineers, and architects who want to gain a deep understanding of Ansible and automation. Whether you're a seasoned pro or a newcomer to the field, this course has something for everyone. 🤓

So, what are you waiting for? [Enroll in the "Ansible Basics: An Automation Technical Overview" course](https://www.udemy.com/course/ansible-basics-an-automation-technical-overview) and take the first step towards streamlining your IT operations. Trust me, your future self will thank you! 😉

## Enhance Your Ansible Learning Experience with GetVM Playground 🚀

Are you ready to take your Ansible skills to the next level? Look no further than the GetVM Playground! This powerful online coding environment allows you to seamlessly put the concepts you've learned in the "Ansible Basics: An Automation Technical Overview" course into practice. 💻

With GetVM's intuitive interface, you can easily spin up virtual machines, configure your environment, and experiment with Ansible Playbooks, variables, and the debug module. No more hassle with local setup or compatibility issues: the Playground handles it all, so you can focus on learning and building your automation skills. 🤖

The best part? The GetVM Playground for this course is readily available at [https://getvm.io/tutorials/ansible-basics-an-automation-technical-overview](https://getvm.io/tutorials/ansible-basics-an-automation-technical-overview). Simply click the link, and you'll be transported to a fully-equipped virtual lab, ready for you to dive in and start automating. 🌐

So, what are you waiting for? Enhance your Ansible learning experience with the power of GetVM Playground and take your IT automation skills to new heights. Let's get coding! 💪

---

## Practice Now!

- 🔗 Visit [Ansible Basics: An Automation Technical Overview | IT Automation, Ansible Fundamentals](https://www.udemy.com/course/ansible-basics-an-automation-technical-overview) original website
- 🚀 Practice [Ansible Basics: An Automation Technical Overview | IT Automation, Ansible Fundamentals](https://getvm.io/tutorials/ansible-basics-an-automation-technical-overview) on GetVM
- 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore)

Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio)! 😄
getvm
1,908,395
Dram Shop Expert
Alabama Dram Shop Law: The Alabama Dram Shop Law holds businesses accountable for serving alcohol...
0
2024-07-02T04:29:30
https://dev.to/dramshop04/dram-shop-expert-43cg
Alabama Dram Shop Law

The Alabama Dram Shop Law holds businesses accountable for serving alcohol to intoxicated individuals, resulting in harm. This law ensures that establishments like bars and restaurants can be held responsible for the actions of their visibly intoxicated patrons. By enforcing this duty, it aims to reduce the potential dangers of excessive alcohol consumption in public venues.

Liquor liability laws, including Alabama's, serve to protect the public from the consequences of alcohol-related incidents. These laws establish clear guidelines for businesses that serve alcohol, making them more vigilant in monitoring their patrons' conduct. The purpose is to prevent accidents and injuries caused by impaired judgment due to intoxication.

- Accountability: Businesses must ensure they do not serve alcohol to visibly intoxicated individuals.
- Public Safety: Aimed at reducing alcohol-related incidents such as drunk driving and assaults.

Understanding the broader implications of liquor liability laws helps you appreciate their importance in maintaining a safer community. The Alabama Dram Shop Law exemplifies how legal frameworks strive to balance business operations and public safety effectively. By holding establishments responsible for their role in serving alcohol responsibly, the law encourages businesses to train their staff to recognize signs of intoxication and to implement effective policies and procedures. This not only reduces the risk of harm to patrons but also helps protect the reputation and viability of the business itself.

In addition to Alabama's Dram Shop Law, each state has its own legal framework governing liquor liability.
For instance, Missouri's Revised Statute Section 537.053 establishes a distinct set of guidelines for holding businesses liable in alcohol-related incidents. Consulting with experts well-versed in these particular laws can be critical in building a strong case or mounting a strong defense.

If you are involved in Dram Shop litigation, seeking professional guidance is paramount. Preston Rideout, Kim Schioldan, and Silver Gordon are seasoned experts with extensive experience in the field. Their expertise can prove invaluable in assessing liability, offering professional testimony, and helping navigate the complexities of alcohol service regulations. To discuss your case and explore potential strategies, take advantage of their offer of a free consultation by calling (662) 466-6045. By tapping into their knowledge and insights, you can strengthen your position and improve your chances of a favorable outcome in Dram Shop litigation.

If you find yourself embroiled in a legal case involving liquor liability, it's essential to have access to expert testimony that can support your position. That's where Dram Shop Experts come into play. With extensive experience in areas like security use of force and Missouri Dram Shop Law, they offer comprehensive litigation support and expert witness testimony that can significantly impact your case's outcome. Whether you're seeking help with security use-of-force cases or require professional testimony for Missouri Dram Shop litigation, Dram Shop Experts can offer valuable insights. Their team of specialists focuses on diverse aspects of liquor liability law and alcohol-related injury cases, ensuring you have the necessary support throughout your legal proceedings. To learn more about how these experts can assist you, visit the practice areas section of their website.
You'll find detailed information on the various areas they cover, including Dram Shop Expert Witness Testimony, Liquor Liability, and Alcohol-Related Injury Practice Areas.

**[Dram Shop Expert](https://www.dramshopexperts.com/practice-areas/responsible-alcohol-service/)**

Protecting your rights and navigating complex legal situations require a comprehensive understanding of relevant laws and expert opinions. With a commitment to client privacy and satisfaction, Dram Shop Experts offer a range of services encompassing Dram Shop, Liquor Liability, Alcohol-Related Personal Injury, Premises Liability, and Wrongful Death expert witness opinions, reports, depositions, and testimony. Their expertise can be instrumental in establishing a strong legal foundation for your case.

Understanding the Alabama Dram Shop Act

The Alabama Dram Shop Act is a legal framework established to impose liability on bar owners and operators when they serve alcohol irresponsibly. Specifically, this law targets situations where an establishment serves alcohol to an individual who is visibly intoxicated. Under those circumstances, if the intoxicated person causes harm to another party, the establishment may be held legally accountable.

Key Provisions and Liability

Key provisions include:

- Visible Intoxication: Establishments are prohibited from serving alcohol to patrons who show clear signs of intoxication.
- Third-Party Harm: Liability arises when a visibly intoxicated person causes harm or injury to a third party.
- Proof of Negligence: The plaintiff must demonstrate that the establishment knowingly served alcohol to an intoxicated individual and that this act directly led to the harm suffered.

Historical Origins of "Dram Shop" Laws

The term "dram shop" originates from 18th-century England, where small bars and pubs sold gin by the dram, a small unit of liquid measure.
These legal guidelines have advanced through the years to cope with public safety and keep establishments chargeable for their function in alcohol-related incidents. If you’re interested by exploring the history and evolution of dram shop laws similarly, you may seek advice from this informative resource on FasterCapital. “Dram save” legal guidelines function a deterrent against reckless alcohol carrier, encouraging corporations to undertake accountable serving practices. Each state has its very own version of these legal guidelines, tailored to nearby wishes and felony precedents. For additional insights into how those laws are carried out in one of a kind contexts, you can discover assets on Nightclub Negligence or Alcohol Intoxication Identification. These outside references provide professional witness testimony and guidance on figuring out intoxication ranges, which can be crucial in determining liability below the Alabama Dram Shop Act. Understanding those key aspects of the Alabama Dram Shop Act helps clarify the responsibilities positioned on bar owners and operators. This legal framework not handiest seeks justice for victims but also promotes safer community practices. What Qualifies as an Incident below the Alabama Dram Shop Act? The Alabama Dram Shop Act covers various incidents that could lead to a lawsuit. These incidents normally involve harm or damage because of alcohol intake, with the status quo that served the alcohol being held accountable. The maximum common examples wherein this regulation is carried out are inebriated using accidents and attack and battery instances. Drunk Driving Accidents Under the Alabama Dram Shop Act, a bar or eating place may be held legally accountable in the event that they serve alcohol to a person who is visibly drunk and that character later causes a drunk driving twist of fate. Here are some key factors approximately these cases: Fatalities: Sadly, many inebriated riding incidents result in deaths. 
In such cases, the families of the victims have the right to file lawsuits against the establishments that served the intoxicated drivers. Injuries: Even in non-fatal accidents, serious injuries can occur. Injured individuals have the option to seek compensation for their medical expenses, lost wages, and pain and suffering. **Assault and Battery Cases** Alcohol often plays a role in violent confrontations. If someone becomes aggressive and commits assault or battery after being over-served at an establishment, that business can be held accountable. Here are a few examples: Bar Fights: A patron who has been over-served alcohol becomes aggressive and gets into a physical fight, causing injuries. Domestic Violence: Alcohol consumption at a venue escalates violent behavior, leading to disputes and harm within intimate relationships. **Other Situations** While drunk driving and assault are the most common examples, other scenarios also fall under the Alabama Dram Shop Act: Property Damage: If an intoxicated person causes damage to property, the establishment can be held liable. Minor Consumption: Serving alcohol to minors who then cause harm can also invoke dram shop laws. To navigate these complex legal situations, it is advisable to consult a Dram Shop Expert like Preston Rideout, who provides specialized services including Liquor Liability and Alcohol Service Expert Witness Testimony. Additionally, establishments can mitigate risk by implementing Responsible Alcohol Service training programs. Understanding these types of incidents helps clarify how the Alabama Dram Shop Act works in real-life situations. Establishments must be careful in their alcohol service practices to prevent potential liabilities arising from these events.
If you need help or wish to discuss your case, feel free to contact the Dram Shop Experts and speak with Preston Rideout at (662) 466-6045. **Who Can File a Lawsuit?** Eligibility for claims under the Alabama Dram Shop Act typically centers on the injured victim as the claimant. This means that individuals who have suffered harm due to the actions of an intoxicated person can pursue legal action against the establishment that served alcohol to the visibly intoxicated individual. Key Criteria for Eligibility: Direct Injury: The plaintiff must have sustained injuries directly caused by an intoxicated person. This includes situations where a drunk driver causes an accident or an intoxicated person commits an assault. Third-Party Claims: The law permits third-party claims, meaning that not only the injured party but also their family members, such as a spouse, child, or parent, can file a lawsuit if they depend on the injured person for support. Visible Intoxication: A critical element is proving that the establishment served alcohol to someone who was visibly intoxicated at the time of service. This often requires witness testimony or expert opinions to establish the visible signs of intoxication. Causation: There must be a clear link between the service of alcohol and the subsequent harm. The plaintiff needs to demonstrate that their injuries were a direct result of the intoxication caused by the establishment's service, which generally involves establishing causation through expert reviews or medical evidence. For those looking to understand their rights and options under this law, consulting with experienced attorneys is advisable.
They can help navigate these specific requirements and assess whether your situation meets the necessary criteria for a successful claim. In cases involving Alabama's Dram Shop Law, it can be useful to engage professionals who specialize in providing Dram Shop Expert Witness Testimony. These specialists, such as Silver Tree Gordon, can offer valuable insights and help build a strong case. Their expertise ranges from dram shop security to personal injury, premises liability, and wrongful death testimony. Additionally, understanding the importance of responsible alcohol service and bartender training in dram shop cases can significantly impact the outcome of a lawsuit. To delve deeper into this topic, you may find the blog post on Dram Shop Law helpful. The article emphasizes the importance of complying with responsible alcohol service practices, bartender training, and intoxication identification in accordance with the law. Furthermore, it may be worth exploring the correlation between alcohol-related harm and public health issues, as it provides a broader perspective on the subject. **Limitations and Considerations** The Alabama Dram Shop Act has specific limitations that can affect the ability to win a case. One crucial rule is that establishments cannot be held liable for injuries sustained by individuals who were already intoxicated. In other words, if someone drinks to the point of drunkenness and subsequently gets injured, they cannot attribute blame to the business that served them. Proving causation is another significant challenge in dram shop cases. Plaintiffs must establish a direct link between their injuries and the alcohol they consumed. This requires demonstrating the following: The bar or restaurant knowingly served someone who was visibly intoxicated: Witness statements or video evidence may be necessary to substantiate this claim.
The alcohol served contributed to their state of intoxication: Medical records and professional testimony from Liquor Liability Expert Witnesses can help establish this connection. Their intoxication directly caused their injuries: A clear correlation must be shown between their actions while under the influence and the harm they suffered. Understanding these limitations is critical for anyone considering a lawsuit under the Alabama Dram Shop Act. Seeking guidance from attorneys who specialize in liquor liability cases, such as those at Dram Shop Experts, can provide valuable advice on navigating these complexities. It is worth noting that strong evidence and familiarity with legal intricacies can significantly impact the outcome of a dram shop case. Therefore, consulting with experts, like Dram Shop Security Expert Kim Schioldan, who can provide expert witness testimony, is highly advisable. **Recent Reforms and Future Implications** Recent changes to the Alabama Dram Shop Act have had a significant effect on liquor liability laws. One of the key updates is the introduction of the Dram Shop Liability Act (SB104). This new legislation brings in clear rules and requirements, including changes to liability insurance requirements and the process for filing claims. The goal of these reforms is to strike a balance between protecting businesses and helping victims seek justice. Court decisions have also played a crucial role in shaping how this law is applied. Judges have consistently stressed that plaintiffs must show an establishment knowingly served alcohol to someone who was visibly drunk, and that this service directly led to harm. This strict requirement is intended to ensure that businesses are careful in how they operate.
**Potential Future Changes** There are ongoing discussions about other ways liquor liability laws might be updated: Social Host Responsibility: Talks are heating up about making private individuals responsible for serving alcohol at social events that result in harm. This could extend dram shop standards beyond businesses. Online Alcohol Sales: As more and more people buy alcohol online, there are specific regulatory challenges. Lawmakers are looking into how traditional dram shop laws can be adapted for digital platforms, ensuring that online sellers also follow responsible serving practices. **Key Points to Remember** Here are some important things to know about the recent reforms and court rulings: New Rules and Standards: The changes include updated requirements for liability insurance and stricter guidelines for how matters are handled legally. Court Decisions: There is a focus on proving that establishments knew they were serving alcohol to patrons who were clearly drunk. For professional advice on running a bar according to industry standards, or for legal testimony from a dram shop expert witness, you can consult Dram Shop Experts. They provide services such as Industry Standard Bar Operations Expert Witness Testimony and have an experienced team of Dram Shop and Security Experts available for consultation. Future legislative developments will likely continue to address these complex issues, ensuring that both businesses and individuals contribute to a safer drinking culture. **The Importance of Liquor Liability Insurance for Businesses** Impact on Businesses: Establishments such as restaurants, bars, and clubs are significantly affected by dram shop laws.
The Alabama Dram Shop Act holds these businesses accountable for serving alcohol to visibly intoxicated individuals who subsequently cause harm. This potential liability can be financially devastating if not properly managed. Therefore, carrying adequate liquor liability insurance is critical. **Why Carry Liquor Liability Insurance?** Liquor liability insurance provides a financial safety net for businesses by covering legal costs, settlements, and judgments that may arise from dram shop claims. Without this coverage, establishments could face severe financial hardship from substantial legal expenses and compensatory damages. Here are some key reasons why businesses should consider carrying liquor liability insurance: Legal Protection: Covers defense costs in lawsuits. Financial Security: Mitigates the risk of large out-of-pocket expenses. Reputation Management: Helps maintain business integrity by addressing claims responsibly. **Risk Management Tips** Minimizing exposure to liquor liability claims involves proactive measures: Staff Training: Implement comprehensive training programs for employees on responsible serving practices. Ensure they understand how to identify and handle intoxicated patrons correctly. Policy Development: Establish clear policies regarding alcohol service and enforce them consistently. Monitor Consumption: Use systems to track alcohol consumption and intervene when necessary. Secure Premises: Maintain a safe environment to prevent altercations or accidents. For further guidance on managing liquor liability risks, consider consulting resources like Dram Shop Experts, who offer specialized services in this domain, with clients including over 100 Dram Shop & Liquor Liability Lawyers as well as numerous restaurants, nightclubs, and bars.
**Responsible Serving Practices** Engaging in responsible serving practices is vital: Check IDs rigorously to avoid underage drinking. Limit drink specials that encourage excessive consumption. Promote alternative transportation options, like ride-sharing services, for patrons who may be too intoxicated to drive. Investing in bartender training can also improve staff readiness to manage difficult situations effectively. For professional training services, Bartender Training at Dram Shop Experts can be a valuable resource. Effective risk management combined with proper liquor liability insurance ensures that businesses are well prepared to handle potential legal challenges while promoting a safer drinking environment for their patrons.
dramshop04
1,908,394
Week 1 of My JavaScript Journey Complete! 🎉
This week marks the completion of the first week of my JavaScript learning journey, and I couldn't be...
0
2024-07-02T04:27:20
https://dev.to/nitin_kumar_8d95be7485e37/week-1-of-my-javascript-journey-complete-313e
This week marks the completion of the first week of my JavaScript learning journey, and I couldn't be more excited about the progress I've made. 🚀 What I Learned: Basics of JavaScript: Variables, data types, Type Conversion, Memory Management, Numbers, Math, String. Variable Declarations: The differences between let and const and their scoping rules. Highlights: Built my first mini-project: a simple calculator! 🧮 I faced a few challenges but learned a lot from debugging and problem-solving. Motivation: "The only way to learn a new programming language is by writing programs in it." – Dennis Ritchie Resources I'm Using: [MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Introduction) for in-depth documentation. [chai or code ](https://youtu.be/sscX432bMZo?si=y3GlLbjG1vJ1xvoq) for interactive tutorials and exercises. JavaScript.info for comprehensive guides and examples. Check Out My Work: GitHub Repository:[javaScript Repo](https://github.com/nittinkumarhr/java-scripts) I look forward to diving deeper into JavaScript and taking on more complex projects in the coming weeks. Every small step is a step closer to mastery. #JavaScript #LearningToCode #CodingJourney Feel free to share any tips or resources that helped you when you were learning JavaScript. Let's connect and learn together! 😊
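A calculator along the lines of that first mini-project might look something like this (a hypothetical sketch for illustration only; the actual project code lives in the linked GitHub repo):

```javascript
// Minimal calculator sketch: applies a binary operator to two numbers.
// Hypothetical example; not the code from the linked repository.
function calculate(a, operator, b) {
  switch (operator) {
    case "+": return a + b;
    case "-": return a - b;
    case "*": return a * b;
    case "/":
      if (b === 0) throw new Error("Division by zero");
      return a / b;
    default:
      throw new Error(`Unknown operator: ${operator}`);
  }
}

console.log(calculate(6, "*", 7)); // prints 42
```

Handling edge cases like division by zero is exactly the kind of debugging a first project teaches best.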
nitin_kumar_8d95be7485e37
1,908,391
LevelFields AI - Elite Investing. Effortless AI
LevelFields is an AI-driven fintech application that automates arduous investment research so...
0
2024-07-02T03:59:00
https://dev.to/levelfields-ai/levelfields-ai-elite-investing-effortless-ai-4igj
fintech, ai, finance, stocks
LevelFields is an AI-driven fintech application that automates arduous investment research so investors can find the best stocks and options trade faster and easier. Throw away your spreadsheets, calculators, and thousands of stock recommendations from "analysts" that litter the news. LevelFields removes the opinion, guesswork, and emotion from investing. [https://www.levelfields.ai](https://www.levelfields.ai)
levelfields-ai
1,908,389
MyFirstApp - React Native with Expo (P16) - Event Swipeable Remove Chat
MyFirstApp - React Native with Expo (P16) - Event Swipeable Remove Chat
27,894
2024-07-02T03:55:18
https://dev.to/skipperhoa/myfirstapp-react-native-with-expo-p16-event-swipeable-remove-chat-44l9
react, reactnative, webdev, tutorial
MyFirstApp - React Native with Expo (P16) - Event Swipeable Remove Chat {% youtube jAOFSmzf0WU %}
skipperhoa
1,908,388
MyFirstApp - React Native with Expo (P15) - Code Layout My Profile
MyFirstApp - React Native with Expo (P15) - Code Layout My Profile
27,894
2024-07-02T03:54:00
https://dev.to/skipperhoa/myfirstapp-react-native-with-expo-p15-code-layout-my-profile-2kc2
react, reactnative, webdev, tutorial
MyFirstApp - React Native with Expo (P15) - Code Layout My Profile {% youtube sFuz-b1maQg %}
skipperhoa
1,902,683
AWS Three-Tier Architecture: Practical Implementation and Strategies for Secure Private Instance Access
Introduction Creating a three-tier architecture on AWS is an essential skill for any cloud...
0
2024-07-02T03:50:54
https://dev.to/dpa2024/aws-three-tier-architecture-practical-implementation-and-strategies-for-secure-private-instance-access-3i7g
aws, cloudcomputing, ec2
**Introduction** Creating a three-tier architecture on AWS is an essential skill for any cloud professional. This architecture involves the separation of presentation, application, and data layers to enhance security, manageability, and scalability. Here’s a detailed guide to designing and configuring this architecture, with practical insights and interesting facts along the way. **Project Overview** A three-tier architecture consists of: 1. Presentation Tier: The web server that handles the user interface. 2. Application Tier: The app server that processes business logic. 3. Data Tier: The database server that stores and manages data. **Step-by-Step Guide to Creating the Three-Tier Architecture** **_1. Setting Up the VPC and Subnets_** **Why is a VPC important in AWS architecture?** A Virtual Private Cloud (VPC) allows you to provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define. It gives you complete control over your network settings. **Create a VPC:** VPC CIDR: 10.0.0.0/20 **Create Subnets:** Public Subnet (AZ1): 10.0.0.0/24 (For Bastion Host and Web server) _Note : Enable auto assign public ip for Public subnets_ Private Subnet 1 (AZ1): 10.0.1.0/24 (For App server ) Private Subnet 2 (AZ1): 10.0.2.0/24 (For Database Server) Private Subnet 3 (AZ2): 10.0.3.0/24 (For High availability App Server) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bphchi9v58o6chtsetmc.png) **2. Configuring the Internet Gateway and Route Tables** ### Internet Gateway An Internet Gateway enables communication between instances in your VPC and the internet. It serves as a target for routing internet traffic and supports IPv4 and IPv6. Essential for allowing internet access to your VPC resources. ### Route Table A Route Table contains rules that determine where network traffic is directed. 
Each subnet in a VPC must be associated with a route table, controlling the traffic flow within the VPC and to external networks. It enables routing to various destinations including internet gateways. **Inbound and Outbound Traffic** _Inbound traffic_ refers to any data packets entering your instance from an external source, such as an SSH connection from your local machine or HTTP requests from users accessing your web application. _Outbound traffic_ refers to data packets leaving your instance to reach an external destination, such as an HTTP request to an external server for updates. - Attach the Internet Gateway (IGW) to the VPC. - **Create a public route table** and associate it with the public subnet. - Add a route in the public route table to direct internet-bound traffic to the IGW. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f7kifw2snrrf9ls802dr.png) **3. Setting Up the Bastion Host** What is the purpose of a bastion host in a cloud architecture? A bastion host acts as a gateway that provides secure access to instances in private subnets. It enhances security by reducing the exposure of your private instances. - Launch an EC2 instance in the public subnet to act as the bastion host. - Configure security groups to allow SSH (port 22), HTTP, and HTTPS access. - Create a key pair to connect to the bastion host. For this project, a PPK key pair was generated to enable connection through the PuTTY terminal. Context on PEM and PPK: PEM (Privacy Enhanced Mail): Commonly used in various applications including OpenSSH, but PuTTY cannot directly use PEM files. PPK (PuTTY Private Key): The format required by PuTTY. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7b91yc4gkjbhg142lrxy.png) **4. Setting Up the Web Server** Launch an EC2 instance in the public subnet for the web server.
Install Apache HTTP Server on the instance (user data): ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5u2izkfsb0kvm014ztph.png) - Configure security groups to allow SSH, HTTP (port 80), and HTTPS (port 443) traffic. **5. Configuring the Application Server** How does the application server interact with other components in a three-tier architecture? The application server processes business logic and communicates with the web server for user requests and with the database server for data retrieval and storage. - Launch an EC2 instance in private subnet 1 for the application server. - Configure security groups to allow SSH access from the bastion host and ICMP from the web server. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fhctwwtfof8fqnjqg1p6.png) **6. Setting Up the Database Server** - **Launch an RDS instance** in private subnet 1 for the database server. - **Configure security groups** to allow MySQL/Aurora traffic from the application server and bastion host. 1. Create a DB subnet group (RDS > DB Subnet group) and choose the relevant VPC and private subnets. 2. Create the database, choosing the required engine (MariaDB for this project), and set the required user name and password. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/49tm57uy6idayvhp2iek.png) **7. Implementing High Availability** Why is high availability important in a cloud architecture? High availability ensures that your applications remain accessible even in the event of hardware or infrastructure failures. This is achieved by distributing resources across multiple availability zones. - **Launch an EC2 instance** in private subnet 3 (AZ2) to ensure high availability. - **Replicate the setup** of the application server in this instance. **8. 
Configuring the NAT Gateway** ### NAT Gateway A NAT (Network Address Translation) Gateway allows instances in a private subnet to connect to the internet or other AWS services while preventing the internet from initiating connections with those instances. This is crucial for downloading updates or software packages securely without exposing your private instances directly to the internet. Create a NAT Gateway in the public subnet and associate it with an Elastic IP. _Need for an Elastic IP_ An Elastic IP is required for a NAT Gateway to ensure a consistent public IP address that external servers can recognize. This allows for reliable communication from instances in private subnets to the internet and back. - **Update the route tables** of the private subnets to direct internet-bound traffic to the NAT Gateway. **Final Security Group Configurations** _**Bastion Host Security Group:**_ - Inbound: SSH, HTTPS, HTTP, MySQL/Aurora (DB Security Group), ICMP (Web Server Security Group) - Outbound: All traffic **_Web Server Security Group:_** - Inbound: HTTP (80), HTTPS (443), SSH (22), ICMP IPv4 (App Server Security Group) - Outbound: All traffic **_Application Server Security Group:_** - Inbound: SSH from Bastion Host, ICMP from Web Server, MySQL/Aurora from Database Server SG - Outbound: All traffic **_Database Server Security Group:_** - Inbound: MySQL/Aurora from Application Server and Bastion Host - Outbound: All traffic **_Testing and Verification_** Note: In a few of the commands below, replace the placeholder text with your own values. **Step 1** Connect to the Bastion Host using PuTTY.
open the Putty terminal > Enter the host ip (public ip of Bastion host) > load the credentials (ppk file for Bastion host keypair)> set the time interval for the session in seconds > open ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29l2uv1qiir3hbawsvcg.png) **Step2** SSH into the Application Server from the Bastion Host: Here let us explore about various ways of accessing a private instance in the following sections categorized as Methods 1,2,3 and 4. **Method 1: Securely Copying Key Files to Bastion Host from Local Machine** This method involves transferring your private key file to the bastion host, which is then used to SSH into the private instance. a) Connect to the bastion host as in the previous step and ensure that the instance is up and running. b) Use the below command on your local computer to copy the key files . command prompt from local computer >pscp -i <file path for bastionkey(ppk) on local computer> <file path for appserver(pem) on local computer> ec2-user@bastionhost_publicip:/home/ec2-user/ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cbc7nxknusyuuvcd7xrn.png) c) Check if the files are transferred using ls command on bastion host terminal. If successfully done you can get the output as below : ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4hl65roo60ws5o60atb.png) d) Change permissions of the file so that it can be read prior to establishing a connection with the app server. 
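Step (d) can be sketched as follows (the key filename here is hypothetical; the screenshot in the article shows the actual command used):

```shell
# Create a placeholder standing in for the transferred key file
# (in the real workflow this file already exists on the bastion host).
touch appserverkey.pem

# Restrict the key to owner read-only; ssh refuses keys with loose permissions.
chmod 400 appserverkey.pem

# Verify the resulting mode
stat -c '%a' appserverkey.pem   # prints 400
```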
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gliff0uh0in8gbmray3y.png) e) SSH into your app server from the bastion host using the command below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36th37almq893ern5209.png) If you successfully connect to the app server via the bastion host, you will see the following: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xfxhpwepqdw4p9379r8e.png) **Step 3** Connect to the Database Server from the Application Server: a) The first step is to install all the packages required to connect to MariaDB. _I struggled a bit to figure out the issue connecting to the DB server; it was finally resolved once all the required packages were installed as described in the document._ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToMariaDBInstance.html sudo dnf install mariadb105 sudo yum install mariadb apt-get install mariadb-client mysql --version b) After installing the packages, use the command below to log in to the DB server with the user name and password provided during the DB instance setup. mysql --user=<username> --password=<your_password> --host=<database-server-endpoint> A successful login to the DB server leads to a screen like the one below, where you can run the command _show databases;_ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ohx1fo949to8ttny9lv.png) **Method 2: Using Agent Forwarding** Agent forwarding allows you to forward your SSH credentials from your local machine to the bastion host, eliminating the need to copy private keys. **Step 1: Load SSH Keys into Pageant:** Right-click on the Pageant icon in the system tray. Select "Add Key" from the context menu. Navigate to the location of your private key file (e.g., bastionkey.ppk) and select it. Repeat the process if you need to add multiple keys (e.g., appkeypair.ppk). Verify Loaded Keys: Select "View Keys". This will display a list of currently loaded keys.
Below is the image of the key listing ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lwh0d8e2igvt18psiv01.png) **Step 2: Enable Agent Forwarding in PuTTY** Open PuTTY Configuration: _Session Configuration:_ In the "Host Name (or IP address)" field, enter the bastion host's public IP address. _Enable Agent Forwarding:_ In the left-hand menu, navigate to Connection > SSH > Auth. Check the box that says "Allow agent forwarding". Refer to the picture below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n428m7qaheoiegs5bbka.png) _Load the Private Key:_ Still in the Connection > SSH > Auth menu, under "Private key file for authentication", click "Browse" and select your bastionkey.ppk file. Click "Open" to connect. _SSH from the Bastion Host to the Private Instance_ Verify SSH Agent is Running: On the bastion host, run `ssh-add -l`. This should list the keys loaded by Pageant. SSH into the Private Instance: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08jiennxcnziu65lhqqn.png) On a successful connection, you are connected to the app server via the bastion host. The crucial part of agent forwarding is that your private key remains on your local machine and is managed by Pageant. The bastion host uses the forwarded key for authentication, which means the key is not physically copied or stored on the bastion host. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ybhzajvkkyyv8bpt5k2.png) **Method 3: Using Port Forwarding / Tunneling** _**content to be updated**_ Port forwarding, or tunneling, allows you to create a secure tunnel through the bastion host to the private instance. It is a method of securely sending data through a public network using encapsulation: wrapping the original data packets inside another set of packets. This creates a secure "tunnel" that hides the data from anyone intercepting the traffic.
Technical Steps with PuTTY _Open First PuTTY Session:_ Connect to the bastion host. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ssijd6gqr4c7u4sfc1i.png) Enable SSH tunneling: In PuTTY, under Connection > SSH > Tunnels, add a new forwarded port: Source port: 8888 (or any available local port). Destination: private-instance-ip:22. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/655e4irjnjf7uatyuo4x.png) Open the session and authenticate with the bastion host's credentials (PPK). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9i9n8do7i07gy4idaafh.png) _Open Second PuTTY Session:_ Connect to localhost:8888. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jeixhwqes8n064k6n6vt.png) Authenticate with the private instance's credentials (PPK). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/72ktxwjx8x51aj50kvyk.png) This results in a connection to the bastion host and then to the app server through the tunnel, as shown in the picture below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6m3rzko2elpdfrb9y75.png) **Method 4: Using EC2 Instance Connect Endpoint** _**content to be updated : This is yet to be experimented and the results will be updated once done**_ The AWS document below provides clear steps for creating an EC2 Instance Connect Endpoint and using it to connect to the private instance. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-using-eice.html Following any one of these four methods gives you access to the app server that resides in the private subnet. Connecting onward to other servers from the app server follows much the same steps as Method 1, apart from a few port selections relevant to databases. Each method brings unique merits and demerits concerning security, convenience, versatility, and setup complexity.
Therefore, selecting the most suitable method requires a thoughtful approach tailored to the specific requirements of your project. **Conclusion** This guide provides a comprehensive approach to designing and configuring a three-tier architecture on AWS and focuses on various methods of accessing a private server. The project covers essential aspects of VPC configurations, subnetting, security group settings and secure accessing making it a quick reference for anyone with basic to medium level understanding and looking to build a 3 tier secure architecture in AWS. **References** https://aws.amazon.com/blogs/database/securely-connect-to-an-amazon-rds-or-amazon-ec2-database-instance-remotely-with-your-preferred-gui/ https://www.ssh.com/academy/ssh/tunneling https://medium.com/adessoturkey/how-to-connect-to-private-ec2-instance-database-via-bastion-host-5b05a256f9f7
dpa2024
1,908,215
Terraform, OpenTofu, and state encryption
TLDR: OpenTofu now supports encryption of State files (and Plan files too!) without depending...
0
2024-07-02T03:49:25
https://dev.to/aws-builders/terraform-opentofu-and-state-encryption-4am1
aws, terraform, opentofu, encryption
**TLDR**: **OpenTofu** now ships with encryption for **State files** (and **Plan files** too!) without depending on the remote backend. **Terraform** does not - and may never support it in its free version. --- It is extremely important to know the tools you intend to use well, so you can design good workflows and, above all, map the risks involved. Anyone who uses Terraform is already familiar with how important the **State File** is, and knows it is critical to guarantee not only the file's resilience but also access control over it. In the post below, we will do a case study demonstrating the problems of having a plain-text State File sitting in S3 buckets. --- Suppose you created a variable in **AWS Secrets Manager** with **Terraform**: ```bash $ aws sts get-caller-identity { "UserId": "AIDA2UC3CSEZOOZQXHZCN", "Account": "730335449394", "Arn": "arn:aws:iam::730335449394:user/cloud_user" } $ aws secretsmanager get-secret-value --secret-id senha_root { "ARN": "arn:aws:secretsmanager:us-east-1:730335449394:secret:senha_root-qiNJDs", "Name": "senha_root", "VersionId": "terraform-20240701215533811100000001", "SecretString": "z0mgp4ssw0rd", "VersionStages": [ "AWSCURRENT" ], "CreatedDate": "2024-07-01T18:55:32.978000-03:00" } ``` A **hacker** user **will not have access to the variable**, unless someone explicitly grants it: ```bash $ aws sts get-caller-identity { "UserId": "AIDA2UC3CSEZLYNWRDSTL", "Account": "730335449394", "Arn": "arn:aws:iam::730335449394:user/hacker" } $ aws secretsmanager get-secret-value --secret-id senha_root An error occurred (AccessDeniedException) when calling the GetSecretValue operation: User: arn:aws:iam::730335449394:user/hacker is not authorized to perform: secretsmanager:GetSecretValue on resource: senha_root because no identity-based policy allows the secretsmanager:GetSecretValue action ``` Note that the error is clearly about permissions: > no identity-based policy allows the
secretsmanager:GetSecretValue action If the hacker manages to compromise a user that **does have access to Secrets Manager**, it is still possible to encrypt with a specific KMS key (rather than the Secrets Manager default one), which adds a second layer of security: ```bash # Now the hacker has compromised a user with access to Secrets Manager! # That gave them the chance to access less important secrets: $ aws secretsmanager get-secret-value --secret-id senha_menos_importante { "ARN": "arn:aws:secretsmanager:us-east-1:730335449394:secret:senha_menos_importante-Y4reR7", "Name": "senha_menos_importante", "VersionId": "e919b63c-6937-44ca-81b9-b8b2172c6b33", "SecretString": "{\"SENHA\":\"menos_importante\"}", "VersionStages": [ "AWSCURRENT" ], "CreatedDate": "2024-07-01T20:22:02.007000-03:00" } # senha_root, however, is encrypted with a non-default KMS key: $ aws secretsmanager get-secret-value --secret-id senha_root An error occurred (AccessDeniedException) when calling the GetSecretValue operation: Access to KMS is not allowed ``` Here, the error message changed: > Access to KMS is not allowed To access the important secret, you need access not only to the Secrets Manager service, but also to the **KMS** key-management service, which usually has more restrictive **Resource Policies** according to its purpose. Except... we usually store **Terraform State** files in S3! What if the hacker compromises a user with access to S3 buckets? ## Accessing Terraform State files A **Terraform State** file is a JSON written in **plain text**, with no protection whatsoever for the stored values. We can even write the Terraform code like this: ```bash $ cat variables.tf variable "senha_root" { type = string description = "Senha de root mais importante que temos" sensitive = true nullable = false } ``` Note the **sensitive** attribute!
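For reference, the setup described above could have been created with Terraform along these lines. This is a hypothetical sketch: the resource names and the KMS wiring are assumptions, not the exact code used in this case study.

```hcl
# Hypothetical sketch: a dedicated (non-default) KMS key protecting a high-value secret.
resource "aws_kms_key" "secrets" {
  description = "CMK protecting high-value secrets"
}

resource "aws_secretsmanager_secret" "senha_root" {
  name       = "senha_root"
  kms_key_id = aws_kms_key.secrets.key_id # non-default key = second layer of access control
}

resource "aws_secretsmanager_secret_version" "senha_root" {
  secret_id     = aws_secretsmanager_secret.senha_root.id
  secret_string = var.senha_root # the sensitive variable from variables.tf
}
```

With this wiring, reading the secret requires both `secretsmanager:GetSecretValue` and `kms:Decrypt` on that specific key, which is exactly the second error the hacker hits above.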
We can also ensure that the parameter is only known at execution time by having the user pass it in as a parameter: ```bash # A form allowed the SENHA variable to be filled in before execution: $ terraform apply -var "senha_root=$SENHA" -auto-approve ``` In practice, though, the **sensitive** attribute only guarantees that the value will not be shown in command output or in log messages, as described [here](https://developer.hashicorp.com/terraform/tutorials/configuration-language/sensitive-variables): > Terraform will then redact these values in the output of Terraform commands or log messages. In the same link, there is an important warning in another section: > When you run Terraform commands with a local state file, Terraform stores the state as plain text, including variable values, even if you have flagged them as sensitive. Terraform needs to store these values in your state so that it can tell if you have changed them since the last time you applied your configuration. In other words: Terraform saves the State File in **plain text**! And further down: > Marking variables as sensitive is not sufficient to secure them. As they themselves describe: > You must also keep them secure while passing them into Terraform configuration... ...which we did with the form, but... > ...and protect them in your state file. Of course, you can use the paid versions to solve your problem: > HCP Terraform and Terraform Enterprise manage and share sensitive values, and encrypt all variable values before storing them.
Otherwise, you are out of luck, as in the command sequence below: ```bash $ aws s3api list-buckets --query 'Buckets[*].Name' --output text devsres-terraform-state-storage $ aws s3 ls --recursive s3://devsres-terraform-state-storage/ 2024-07-01 20:30:15 3987 terraform/state/senha_root $ aws s3 cp s3://devsres-terraform-state-storage/terraform/state/senha_root /tmp/ download: s3://devsres-terraform-state-storage/terraform/state/senha_root to /tmp/senha_root # Most people would honestly just use grep: $ jq -r '.resources[].instances[].attributes | select(.secret_string != null).secret_string' /tmp/senha_root z0mgM1nh4p@ssw0rd! ``` ## OpenTofu Terraform state encryption But that is only if we are talking about **Terraform**. **OpenTofu** is a free-software project derived from Terraform after HashiCorp's license change to BSL. Rather than being a simple "fork", OpenTofu has introduced new features that answer long-standing requests from the community, which in practice means its code is no longer always 100% compatible! For example, it is possible to **encrypt the Terraform State** with a passphrase or with a key service such as KMS, **without having to rely on native S3 features**! How? Well, you will need to [read the documentation](https://opentofu.org/docs/language/state/encryption/). This feature has been available since version [1.7.0](https://github.com/opentofu/opentofu/releases/tag/v1.7.0), released in April 2024. The example below is enough to demonstrate passphrase encryption: ```bash $ cat terraform.tf terraform { backend "s3" { bucket = "devsres-terraform-state-storage" key = "tofu/state/senha_root" region = "us-east-1" # encrypt = true # kms_key_id = "arn:aws:kms:us-east-1:730335449394:key/9b8acab0-df09-4c2b-81ab-7dd33d2a4ba2" } encryption { key_provider "pbkdf2" "senha_de_state" { passphrase = "wow!criptografia!"
} method "aes_gcm" "protege" { keys = key_provider.pbkdf2.senha_de_state } state { method = method.aes_gcm.protege enforced = true } } } ``` Ao usar o código acima, este é o **State File** disponível no bucket S3: ```bash $ aws s3 cp s3://devsres-terraform-state-storage/tofu/state/senha_root /tmp/ t download: s3://devsres-terraform-state-storage/tofu/state/senha_root to ../../../../../tmp/senha_root $ cat /tmp/senha_root {"serial":1,"lineage":"5d8b4611-05a3-1f8a-404a-11e1b0693e8f","meta":{"key_provider.pbkdf2.senha_de_state":"eyJzYWx0IjoiN3d4VmgxcHF3S01EQ0FzRXpqZ0xnU0w3bkJGbmFSd2pmSGVwWm45VTlkOD0iLCJpdGVyYXRpb25zIjo2MDAwMDAsImhhc2hfZnVuY3Rpb24iOiJzaGE1MTIiLCJrZXlfbGVuZ3RoIjozMn0="},"encrypted_data":"YihB4BhoJnJ/RV0tmk+CzHDrs6YBQOqVPf+JllkbiMQex0U9TglBOb+iLo1OQRIw4EyXCPDgqwdTM7yx+OFp2bCiiP+mQC5dk6pv6SQiRLKc1rvVV3pfrpvnts59s8SYSFZOzOTJfZgOeEaGKueGdgmkbQR2hXE6BRh0o3ohi/Zxw67B5pKtYNdNJFQPgLbBYDhNP/a5pCpsNi2PYw6I4xILbqLKgQPMmspQPS4+zIP2IU71VnW9anirJp5trV9dMb0v2qyzZLY4iitsbgltRYziCJURh/e4vR7P1oTbaI0OZ422r1uJLIWyrKzZH12pqtqjNmT/KXIGXtd76TcqjmMxX+s1jtyvV5F6EnwCU857DfGyBl/0WgREPo7kB9uH6MhwOCkUPTe638awLp+8Ess9cKFzxx/jn8u4GvtkBEJoTUO/0fFVh0fIudtYaRuiNuJ4Hd/rMlgfXc/smo50su23jeHEJfycksWnwjXIreMLzkXevQChZ/ix3RyPxH55dZIivyyBaILIvaP+1dEzzgCNiuD0chtxOTqOEkJnh30exznLRLR/8q3ISmN86co2C1fvs7PwmpTyYlDb33DzwmpYMbdJ79dngM4nc7d2drv6kDxyi2+KYdGKZmO1r56lQgxPewCNQPwzyC7OOp/rNWB4K2s9favdKQ2Fh3m7JtOi9PWzMTvHPZVEylam3RxbpH4IRoDEpGXw9tzRCBFNXbmBwEDTNKYM2yuDMyrqIQj0PuTitsRWtfScPZtZRs7ar0UmeAvG5Pn/H9Z3WEwk/um9Yogob7HAYsMnsBZyRoCrGxg1nJYFt65WPzcxmqbx/tO3mHwOL75RLQ6bicmVLIHbXTI92zc1OZ9uf80QIMVA4Qs3FlKXGW4rOufEFXUd1d31qoM4nZaxPB3kEJmNUiuKIaKbvcHCnbkSKsQP+WfmBYFCbBXSbwf9R5EXdU6kigX3Ixg5pkyXkeZNUXoFA707Xq6QuwWWihuAfbq16zPIpmWzXZPRiGOpAFGV5tJE4/ZXkL0l/cDf7mbkzN7/mK18wFXCWHeBBhi/KDNHpyvrFyO/RM5cLAnYQR0GBZ1FYPmVASYTwoF/m+T/zfZQMLXsinrFhmBaPOuhT5/v3O6X6S0DaMJYoQiupOPXXVwJ2Go/TGWw00AZ8J4I4mn+yz7rAG5VZWiz4H08Z3VuAWO9O5J6EYiMqGsU10DEftNy1juao3UmlnjZx8XkRNmgkDCfNHXx3+eo8mu5vGds9HrvsGYpUVX0rQQtzh5JIcUOQz9Yh/BNsRS
xZ+7Y5SY45ifq1rf6kjaA0R8gBEoPvAAOE2QBZaKkq+wrtiJ4TiXJk7fni2nsSv6c29pONaQMq7E9TVdL5LhS8vzl69ticbheAJpoDUOR2R2Bf0FnpFx9d2P1mckCBqhIIuQQqMPjL+0/az4ysAA6k/LF+xfH7WBxl6RfUFHzfXty792z0HkueF62Y+1a9VRJViuaSl5f5QJ90tYnHsM3ooAb6c1m+tTevxE2nyIIvb2bzOyh7tOLVFSgJzzuX/C5VXGQ6KdeRCVQFcGP+XuodZQEhkkpWBxVIHXJfSAEFNQPvKGJJdkadz5zbyw1GJbY6CSwYB/rc5v1H6FDc1YSLNRTk2d6cEI71rtURuuYrO9K/Rq9EowPwdGRO+eom8Wu/QMsyGiab6hiAbek8frq4zAxhcOHjXTbJG0KkvteBKwNJfDeLKQ5A/JyjAIdlAK9zEBtuf+kBWNoQN85izVwftMQghpCVFIlUEqYA0Lbgo0a4bYvcDuCn6lIaekY2o2qoeS+CBKR0vp4DYFxVOBhAET92Sk1fODeZflvGkISJPMyXcKC1j1f50wN/X+cTxkb/+7r/IILbMtcxoD3sFIs29SgacaYizM/VKuAIcj7XMD9z7qTE7LQvbU+UL9C1ZyCms8havyrGkyODDWO8TNDI2bASzLtc08a9c3lAFVafqYn0ioSuP+FKpb5HopNeyEXh1U30sp1gmkPT/KF4VQdNt6joQ1El5SdpZra68IOkLrG92wKiWUPenAND+ubZYh5AX5fIOSUCAFZIzj1akGE9HJ1aA4+pHiIbC51dVvjtOVJnYWSkvOyiQteZs3Hmrr003R7COSVVbJqCYnUfcDr6HnyNT+ucoPMMpx1M1A7A3w4UQcfvxHEqOOITI/o7WMW52ZpMzBjc+o8fwSq2ybFA5SPetXTQnmyqD7B9Cx0/2s7J6neMsDPaM8H1gmHXZQBE4VPPAvoe6CrSy87I4Np3b2XH6+0coHBPChaI6Y0NGj+qqgOtKB/JSDERAU5zUELbhevPpAqOw71Cq/kuGMMC8PBAiO8Nflkkmx8Lv5qIwrAAu7NpFE4zjYAVtZw04SQxFXIxhqnXwqYkQoI/6ABMKKIacctGov1n1vIJYTC84pLHVqEnX+gGbA7JaCpewIdHvgOecgaA7l2/bSKNwck5J+7koD1GxOO9aM2YQvFpYa3VjbVTGSGzAglXaM40r+NVj/qjPbGdB/CoAnSIRfVPnJgoKtQHFC4fltm3IWJ2Xf/kQm3h4b/sks4V8AIwr6IGSE6ZtwsPA4tDDumbta2iGuZrEw9VQDad0kYJkMVZ767Kh4dx1os55jk7lbOvv9j0g7ryFMjmZx1NzfqswFa4e23yvyHpYoKjLxV2pFq2rByZifdPEWGrp37Zt7NgJtHU3wdjQSVuiB5UeX7q/1Kr8PDRWtRNxaZ55KmZhIswqwr7uji+qnsWGxq7hRtpcnPZk4GWN1GenpBHX+T/YTsylXBAOvpHWnj+r3YgcwkhTn4lrPZ3Ta9CXxMZMG2Cj1c3yvqfAHDDQ6uVHKXnai+vWJFUQvyPPsTU5njn3kBcptwnSptNPqzZnrE26qy6E8qK9bS6a1YhYjbw9J6W8ORJacJYfQj5OabGTcaBLC6zzY9GF9LsUy9WoVb8Fph5ydJLXkyARJwSlvWpzKc4psLKhtjcYIzBoEHg9SBTDrk4LKtiRjVaaa5sjjixib1O7fUkZEREG6IzhsJ3AOarRbSb1J1KpF5iC5Pc31yjkkt6yBstBqYLqINRff5nrNzT61i28ZwTZRIQtV9hb4JotJo50JJsHaN7idalZ47teRrIlvXQpOF3dkO1BHRN4rfEUHuIBV9/cdUz9ylqyQtb+qPmK7LD246H9bRZRj8e80DJtfwPBFjCPXYMCHO2a22aGIMjFXTeo1IHIdwvFIeevSHzzyDhfViknaGu6B7fEM5fGKmAbEZ5IpGxGK22UfXQDXHgOns0IDBUovgS5d
lN0Ag2/swCS5i4ys/nzi01AJfm1HMfu7nIisBK39FJLDJlDycQjcvdk3tuwYktI+4uybnXl03PaE6re7+FV6FFveXYfpCrrHasAOs/Cj9isgbk+/vSSDVeS5324/DawlV54JCUsUA512ad167gCGZJcFOrxTl9vXke4fw2iHnX00q6sx3drNZ4LJxynSxiI0wF3i7Fm5c9poVQC2Tgko3usNDM8eojjVbi2e7dAPtWRi3rkAxT7Ad/RNY9EGcYg0j5KqOjeWm/leQhsgeqCQ1DVUeRCYb6q1w0Ghahd1ecmvEtm3HgtEOtBD6tPrgD5deUEjrnKLEUPKAiZMg3g5AnMEWEYVBfo/gpc37wp+luMjs3phHe2sx/l1eHOUrOR57jLxqsxOOh5FivBRv/RT2amc5n8PzA6RMLYPweF1kA5005TimawYHEfMf0bWoVe+arCeyk1MCrAKg+WlbzFC4UKH/wCYcD+Cy+kWAdfjuQxhCX3jmkgyTDO0dRzHEqbMv4NJn9zTmC3PqdA6JoXAgRvQhqYdY8x2iQA6SqUBu0K1D4BYPROSoJAp0T0e4zRAKzN2C6CS51DNYEFPS8XsvWemKYdkTKuohBfukUdyq8qz0LosLC1SmkMKxRo3iQGHJcBL+u/n1eUeHQJzwO1S79KibdwXyXOkV+m0/2TEYlzUFF5fy+kUBDPgnUPkiHNb23qTGXnry7vxE0L4Wv3hwwuHu9Au1a7ot3V/1pHcdeHusL+peR9npTVriMsAVzt4tCQDweVTgMwSuXi1J/bNo13hHR9tvw8jpJ7SW5avqTHxJT6g2PaOeztfT6h7CcxOJ1LHFUZKwLq4NNjMxJebdcX0sdYeOdO/ZsHfAN4RJbashe67ulqsoxk/rjRFkp/pbeXoC43TUByKxGcHUNpPs3u6gMH4L2mSsFs4+Euvk0gBhKqiO+CBtSauggZKrXzM9UFJsjtBNL6OSaPip3NVIG2bwUd1bhjLrdicVAGZBnoa6pIIBHndNMQCepZsPpM45OGQFDtpUIDvlrzO2uqlUqS87HYWz5hKfUri9nUwhBRnrkFkjbjchQMGDX9QcWHlfiS4GYT41OZfXWjbtrdOCw0CAvCaMH1906M4dZ3n3KnidtC0E9HU/Gq3hPNZPqN4iE4xuL9zresPyvjxWpdWzYOkmKP5qtqN5qN1xCW3Yv7HVHCblT9NbpIOKP4VbHQ4yXgrx5VQ1lDSueMFA71mvDGT7Dzdx299TH1v54oZmIofXC5suFSG7qNtlN+nbRlp6aGHeljaI6SF4vVjCBrH+H32lglJk2z27eT6jYXpnw3mfRw82czhOgBKOTFMNxFz17ydd3BQ5laBSn/6CNnLdwdhBmD8Agoyw4xDbu4YWmJtAy3OiiRRNF69KGg1+RqfaWaADpH8RLA4Bjlo00GwBUj4jeecFWjrGMbQaYwewUww/IVBt6BJR/qJ5gJpRQKnAe+NVWkrVrZk0PtEyNbMVLmWQrMPrK8nyUiZfrCmMzmpeUrNq5428Niegw+qbNRc35HbLxSOUuuDvkNytrkbuj7EAvwLlzgrAv43Mb5+0p5ywu+0+FrdbRPuivA7DysmFKFWNIF7yyLp8eqmHk0BtCzfMqWOL3ubUfht0T3o5P2eKuikRktslY5Vmwe9X3yUWMOU97UUHrfCFEU4MtNBe1lr2ylLccYE7PaBMwRnHgBum24WKAqlOfFV8FlU5wj2+J+VL1tR2OXdnjD03q9FfAi2MyYzIzY7HWF9ckw9bEb+kqQ1U0zXdtf0JYa0uZCO+ec14zIlvQm8GDspaMpf/y1cHgb4aI5dvD2OYE0CYucxfxVD9/Et68UbrIaXJNrD3LanALTLYKWOEtQOI+Gs1nFGp8SwWArRHJTsJN1LV8TcfdY6aGNBsMx74","encryption_version":"v0"} ``` O arquivo está criptografado e indisponível para 
acesso direto. Oh, and if you try to use this code block with Terraform, it throws an error: ```bash $ terraform init -migrate-state ... ╷ │ Error: Unsupported block type │ │ on main.tf line 10, in terraform: │ 10: encryption { │ │ Blocks of type "encryption" are not expected here. ╵ ``` ## Extra: Mitigation on AWS with KMS A sharper person could argue: "well, just use the KMS key to **also** encrypt your State File"! ```bash $ cat terraform { backend "s3" { bucket = "devsres-terraform-state-storage" key = "terraform/state/senha_root" region = "us-east-1" encrypt = true kms_key_id = "arn:aws:kms:us-east-1:730335449394:key/9b8acab0-df09-4c2b-81ab-7dd33d2a4ba2" } } ``` And it is true, this mitigates the access: ```bash $ aws s3 cp s3://devsres-terraform-state-storage/terraform/state/senha_root /tmp/ download failed: s3://devsres-terraform-state-storage/terraform/state/senha_root to ../../tmp/senha_root An error occurred (AccessDenied) when calling the GetObject operation: User: arn:aws:iam::730335449394:user/hacker is not authorized to perform: kms:Decrypt on resource: arn:aws:kms:us-east-1:730335449394:key/9b8acab0-df09-4c2b-81ab-7dd33d2a4ba2 because no identity-based policy allows the kms:Decrypt action ``` But not every backend supports this feature. If I am not mistaken, it cannot be used with [Azure Blobs](https://github.com/hashicorp/terraform/issues/30944); other backends and on-premises users are also left without an option.
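Besides the passphrase example shown earlier, OpenTofu's `encryption` block can also be backed by a KMS key provider, which helps exactly in the cases where the backend itself cannot encrypt. The sketch below follows the OpenTofu documentation; the key ARN is a placeholder and the exact attribute set should be checked against the docs for your OpenTofu version:

```hcl
terraform {
  encryption {
    # KMS-backed key provider (placeholder ARN).
    key_provider "aws_kms" "state_key" {
      kms_key_id = "arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000"
      region     = "us-east-1"
      key_spec   = "AES_256"
    }
    method "aes_gcm" "protect" {
      keys = key_provider.aws_kms.state_key
    }
    state {
      method   = method.aes_gcm.protect
      enforced = true
    }
  }
}
```

Since this encrypts the state contents themselves, it works regardless of whether the backend (S3, Azure Blobs, local disk) offers its own at-rest encryption.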
marcelo_devsres
1,908,387
Mastering Binary Search in JavaScript Part I: Iterative, Recursive. Day 2 of 30 Days of DSA 🚀🦾
Introduction Binary search is a powerful search algorithm that excels in finding elements...
0
2024-07-02T03:45:12
https://dev.to/rajusaha/mastering-binary-search-in-javascript-part-i-iterative-recursive-day-2-of-30-days-of-dsa-1mg5
javascript, algorithms, learning, webdev
### Introduction Binary search is a powerful search algorithm that excels in finding elements within sorted arrays. It leverages the "divide and conquer" approach, repeatedly dividing the search space in half until the target element is located or determined to be absent. This post delves into implementing binary search in JavaScript using iterative and recursive techniques, along with exploring its application for element insertion. #### Understanding Binary Search ``` Assumptions: The input array is sorted in ascending order. Logic: 1. Initialize start and end indices to point to the beginning and end of the array, respectively. 2. While start is less than or equal to end: - Calculate the middle index using Math.floor((start + end) / 2). - If the target element is found at arr[middle]: - Return the middle index as the element's position. - If the target element is less than arr[middle]: - Recalculate end to exclude the right half of the search space (since the array is sorted). - If the target element is greater than arr[middle]: - Recalculate start to exclude the left half of the search space. 3. If the loop completes without finding the element, return null (or an appropriate indication of element not found). 
``` #### Implementation: Iterative Approach ```javascript function binaryIterative(arr, target) { let start = 0; let end = arr.length - 1; while (start <= end) { let middle = Math.floor((start + end) / 2); if (arr[middle] === target) { return middle; } else if (target < arr[middle]) { end = middle - 1; } else { start = middle + 1; } } return 'Not found'; } console.log(binaryIterative([2, 7, 9, 11, 25], 10)); // Output: Not found ``` #### Implementation: Recursive Approach ```javascript function binaryRecursive(arr, start, end, target) { if (start > end) { return 'Element Not Found'; // Or any appropriate indication } let middle = Math.floor((start + end) / 2); if (arr[middle] === target) { return middle; } else if (target < arr[middle]) { return binaryRecursive(arr, start, middle - 1, target); } else { return binaryRecursive(arr, middle + 1, end, target); } } function findElement(arr, target) { let start = 0; let end = arr.length - 1; return binaryRecursive(arr, start, end, target); } console.log(findElement([2, 7, 9, 11, 25], 9)); // Output: 2 ``` #### Choosing the Approach - **Iterative**: Generally preferred for larger arrays, as it avoids the overhead of function calls associated with recursion. - **Recursive**: Can be more concise and easier to understand. #### Problem I: Finding or Inserting an Element The provided `searchInsertK` function demonstrates how to modify the binary search to find the appropriate insertion point for an element in a sorted array: ```javascript function searchInsertK(arr, N, K) { let start = 0; let end = N - 1; while (start <= end) { let mid = Math.floor((start + end) / 2); if (K === arr[mid]) { return mid; // Element found } else if (K < arr[mid]) { end = mid - 1; } else { start = mid + 1; } } // If not found, return the insertion index (after the last element smaller than K) return end + 1; } ``` Binary search is an essential algorithm for efficient searching in sorted arrays. 
Understanding its iterative and recursive implementations provides valuable tools.
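To make the insertion behavior concrete, here is a quick standalone check of `searchInsertK` (the function is reproduced from the snippet above so the example runs on its own):

```javascript
// searchInsertK reproduced from above so this example is self-contained.
function searchInsertK(arr, N, K) {
  let start = 0;
  let end = N - 1;
  while (start <= end) {
    const mid = Math.floor((start + end) / 2);
    if (K === arr[mid]) return mid;       // exact match: return its index
    else if (K < arr[mid]) end = mid - 1; // search the left half
    else start = mid + 1;                 // search the right half
  }
  return end + 1; // insertion index that keeps the array sorted
}

const arr = [2, 7, 9, 11, 25];
console.log(searchInsertK(arr, arr.length, 9));  // 2  (found at index 2)
console.log(searchInsertK(arr, arr.length, 10)); // 3  (insert before 11)
console.log(searchInsertK(arr, arr.length, 1));  // 0  (insert at the front)
console.log(searchInsertK(arr, arr.length, 30)); // 5  (append at the end)
```

Note that for a missing element the returned value equals `start` when the loop ends, which is exactly the position where `K` could be spliced in without breaking the sort order.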
rajusaha
1,908,385
Automating Linux User Management with Bash Scripting
Managing user accounts on a Linux system can be daunting, especially in environments with frequent...
0
2024-07-02T03:44:11
https://dev.to/by_segun_moses/automating-linux-user-management-with-bash-scripting-3f8d
aws, linux, bash, scripting
Managing user accounts on a Linux system can be daunting, especially in environments with frequent employee onboarding. As a DevOps engineer, familiar with operational SysOps functionalities, I often need a reliable, automated solution to streamline this process. This is where the _create_users.sh_ Bash script comes into play, automating user creation and management based on input from a text file. ## **The Script’s Mission** The primary goal of create_users.sh is to automate the creation of user accounts on a Linux machine. Reading a specified text file containing usernames and associated groups, the script performs a series of operations to ensure each user is set up correctly with appropriate permissions and group memberships. ## **Step-by-Step Explanation** **1. Write your create_users.sh file** Create a new script file named `create_users.sh` and make it executable using the following commands: I. ``` touch create_users.sh ``` II. ``` chmod +x create_users.sh ``` **2. Check for Input File** Before proceeding, the script verifies that you’ve provided an input file containing user and group information. This early check prevents errors and guides users on proper script usage. 
Create a text file with sample user and group data: `sudo nano user_data.txt` ``` #!/bin/bash # Log file location LOGFILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Check if the input file is provided if [ -z "$1" ]; then echo "Error: No file was provided" echo "Usage: $0 <name-of-text-file>" exit 1 fi # Create log and password files mkdir -p /var/secure touch $LOGFILE $PASSWORD_FILE chmod 600 $PASSWORD_FILE generate_random_password() { local length=${1:-10} # Default length is 10 if no argument is provided LC_ALL=C tr -dc 'A-Za-z0-9!?%+=' < /dev/urandom | head -c $length } # Function to create a user create_user() { local username=$1 local groups=$2 if getent passwd "$username" > /dev/null; then echo "User $username already exists" | tee -a $LOGFILE else useradd -m $username echo "Created user $username" | tee -a $LOGFILE fi # Add user to specified groups groups_array=($(echo $groups | tr "," "\n")) for group in "${groups_array[@]}"; do if ! getent group "$group" >/dev/null; then groupadd "$group" echo "Created group $group" | tee -a $LOGFILE fi usermod -aG "$group" "$username" echo "Added user $username to group $group" | tee -a $LOGFILE done # Set up home directory permissions chmod 700 /home/$username chown $username:$username /home/$username echo "Set up home directory for user $username" | tee -a $LOGFILE # Generate a random password password=$(generate_random_password 12) echo "$username:$password" | chpasswd echo "$username,$password" >> $PASSWORD_FILE echo "Set password for user $username" | tee -a $LOGFILE } # Read the input file and create users while IFS=';' read -r username groups; do create_user "$username" "$groups" done < "$1" echo "User creation process completed." | tee -a $LOGFILE ``` **3. Define Variables** Key variables such as `LOGFILE` and `PASSWORD_FILE` are defined to manage paths and filenames throughout the script. This enhances readability and makes maintenance easier. **4. 
Create Directories and Secure Password File** Ensuring security is paramount. The script creates the necessary directories if they don’t exist and initializes a password file (`/var/secure/user_passwords.csv`) with stringent permissions (`chmod 600`). This ensures that only authorized users can access sensitive password information. **5. Define Functions** Modularity is key to maintainable scripting. The script defines `generate_random_password()`, which draws from `/dev/urandom` to generate strong, random passwords for each user, while logging is handled by piping each message through `tee -a $LOGFILE`, recording detailed actions and facilitating troubleshooting and audit trails. **6. Read and Process Input File** The heart of the script lies in processing the input file: it reads each line, normalizes formatting, and parses usernames and associated groups. For each user: It checks if the user already exists to prevent duplication. If not, it creates the user together with a secure home directory. A random password is generated and securely stored in `PASSWORD_FILE`. Additional specified groups are created if needed, and the user is added to these groups. **7. End of Script** Upon completion, the script logs a message indicating successful user creation and prompts users to review the `LOGFILE` for detailed operations performed. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mufdkvgsp5yfymnmgnv.png) ## **Important Decisions** **Password Security**: Emphasizing security, the script generates robust random passwords from `/dev/urandom`. Storing passwords in a file with restricted permissions (600) ensures compliance with security best practices. **Logging**: Detailed logging via `tee -a $LOGFILE` aids in troubleshooting and provides an audit trail of script activities. This proves invaluable in diagnosing issues and maintaining accountability. 
**Error Handling**: The script anticipates potential errors, such as missing input files or existing user accounts, and handles them gracefully to prevent disruptions. **Modular Functions**: The script promotes code reuse and maintainability by encapsulating password generation and user creation into functions. **Group Management**: Dynamic group management ensures users are assigned to appropriate groups, enhancing system organization and access control. ## Real-World Application During my transformative journey with the [HNG Internship](https://hng.tech/internship), an immersive platform renowned for nurturing tech talents, I encountered multifaceted challenges that demanded agile solutions. From seamless user provisioning amidst project expansions to fortifying data security protocols across distributed environments, the create_users.sh script emerged as a pivotal answer to overhead and task overload. To explore how the [HNG Internship](https://hng.tech/internship) empowers emerging tech professionals, check out their comprehensive programs such as [HNG Internship](https://hng.tech/internship) and [HNG Hire](https://hng.tech/hire), known for industry excellence and transformative learning experiences.
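To illustrate the input format the script expects (`username;group1,group2` per line), here is a hypothetical `user_data.txt` together with a dry run of the same parsing loop; nothing is created, it only prints what would happen (the usernames and groups are made up for the example):

```shell
# Hypothetical sample input in the username;groups format parsed by create_users.sh.
cat > /tmp/user_data.txt <<'EOF'
light;sudo,dev
idimma;sudo
mayowa;dev,www-data
EOF

# Dry run of the read loop used by the script: echo instead of useradd/usermod.
while IFS=';' read -r username groups; do
  echo "would create user '$username' with groups: $groups"
done < /tmp/user_data.txt
```

Because `IFS=';'` splits each line at the first semicolon, everything after it lands in `groups`, which the real script then splits again on commas.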
by_segun_moses
1,908,380
Kubernetes Services: Understanding NodePort and ClusterIP
Welcome back to my blog series on CK 2024! This is the ninth post in the series where we dive deep...
0
2024-07-02T03:34:46
https://dev.to/jensen1806/kubernetes-services-understanding-nodeport-and-clusterip-2f2n
kubernetes, docker, networking, containers
Welcome back to my blog series on CK 2024! This is the ninth post in the series where we dive deep into various Kubernetes services such as ClusterIP, NodePort, ExternalName, and LoadBalancer. In this post, we'll explore how these services work and their importance in a Kubernetes environment. #### Recap from Previous Posts In our last post, we discussed ReplicaSets, ReplicationControllers, and Deployments. We learned how to create a deployment with multiple pods running an Nginx front-end application. However, we didn't expose this deployment to external users, which is what we'll focus on today using Kubernetes services. ### Introduction to Kubernetes Services Kubernetes services enable communication between various components within and outside a Kubernetes cluster. They provide a way to expose applications running on a set of pods, ensure stable IP addresses, and load balance traffic across multiple pods. The main types of services in Kubernetes are: 1. ClusterIP 2. NodePort 3. ExternalName 4. LoadBalancer ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mhg3mebrd9wn3d4v4yit.png) ### Why Use Kubernetes Services? Services in Kubernetes make your application components loosely coupled, allowing them to communicate with each other efficiently and be accessible to external users. They ensure that your pods, which can be dynamically created and destroyed, are always reachable. ### NodePort Service A NodePort service exposes your application on a static port on each node's IP address. This service allows external users to access your application by requesting the node's IP address and the NodePort. #### How NodePort Works Let's say we have a front-end application running on a pod. We want to expose this application externally. We do this by creating a NodePort service: - **NodePort**: A static port on each node's IP (e.g., 30001). - **Port**: The port exposed by the service within the cluster (e.g., 80). 
- **TargetPort**: The port on which the application is running inside the pod (e.g., 80). The traffic flow will be: **External user -> Node IP -> Service Port -> TargetPort** (Application Pod) Example NodePort YAML ``` apiVersion: v1 kind: Service metadata: name: nodeport-service labels: env: demo spec: type: NodePort selector: env: demo ports: - port: 80 targetPort: 80 nodePort: 30001 ``` Create the service using: ``` kubectl apply -f nodeport.yaml ``` Verify the service creation: ``` kubectl get svc ``` You should see the service listed with its ClusterIP, NodePort, and target ports. ### ClusterIP Service A ClusterIP service is the default service type in Kubernetes. It provides a stable internal IP address accessible only within the cluster. This service is used for internal communication between different pods. #### How ClusterIP Works ClusterIP allows your front-end application to communicate with back-end services or databases without exposing them to external networks. When you create a ClusterIP service, Kubernetes assigns it an internal IP address and sets up routing rules to direct traffic to the appropriate pods. Example ClusterIP YAML ``` apiVersion: v1 kind: Service metadata: name: clusterip-service labels: app: my-app spec: selector: app: my-app ports: - protocol: TCP port: 80 targetPort: 8080 ``` Create the service using: ``` kubectl apply -f clusterip.yaml ``` Verify the service creation: ``` kubectl get svc ``` You should see the service listed with its ClusterIP and ports. ### Conclusion Understanding and correctly configuring services is crucial for managing communication within a Kubernetes cluster. NodePort and ClusterIP services serve different purposes but are both essential for building scalable and robust applications. In the next post, we will dive deeper into Multi-container commands and namespaces. Stay tuned! 
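As a small appendix to the ClusterIP section above, here is how a pod might consume that service from inside the cluster via cluster DNS. The image and command below are illustrative assumptions, not part of the original walkthrough:

```
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl   # illustrative image choice
      # Cluster DNS resolves the service name to its ClusterIP; traffic sent to
      # port 80 is forwarded to targetPort 8080 on a pod matching the selector.
      args: ["curl", "-s", "http://clusterip-service:80"]
```

From another namespace, the fully qualified name would be `clusterip-service.default.svc.cluster.local` (assuming the service lives in the `default` namespace).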
For further reference, check out the detailed YouTube video here: {% embed https://www.youtube.com/watch?v=tHAQWLKMTB0&list=WL&index=18 %}
jensen1806
1,908,381
Generative AI and Sustainable Innovation
Introduction The rapid advancement of technology has brought about significant changes...
27,673
2024-07-02T03:34:35
https://dev.to/rapidinnovation/generative-ai-and-sustainable-innovation-1ec8
## Introduction The rapid advancement of technology has brought about significant changes in various sectors. Among these, Artificial Intelligence (AI) stands out as one of the most transformative. Within AI, Generative AI has emerged as a particularly exciting and impactful subset. This introduction provides an overview of Generative AI and highlights the importance of sustainable innovation in this field. ## How Generative AI Works Generative AI leverages advanced algorithms and neural networks to generate novel outputs based on patterns and data it has learned. This capability has far-reaching implications across various domains, from art and entertainment to healthcare and engineering. ## What is Sustainable Innovation? Sustainable innovation refers to the development and implementation of new products, services, processes, or business models that not only drive economic growth but also address environmental and social challenges. It aims to create long-term value by balancing economic, environmental, and social considerations. ## Types of Generative AI Applications in Prototyping and Development Generative AI has revolutionized prototyping and development by introducing innovative methods to enhance design processes, optimize material usage, and streamline production workflows. Key applications include design optimization, material efficiency, energy consumption reduction, and lifecycle analysis. ## Benefits of Using Generative AI for Sustainable Innovation Generative AI offers significant benefits for sustainable innovation by optimizing resource use, accelerating the innovation process, enhancing decision-making, and promoting circular economy solutions. It can help develop innovative solutions that address environmental challenges and promote sustainability. 
## Challenges in Implementing Generative AI Implementing generative AI in real-world applications is fraught with challenges, including technical barriers, data privacy and security, ethical considerations, and integration with existing systems. Addressing these issues requires a multidisciplinary approach and collaboration between various stakeholders. ## Future of Generative AI in Rapid Prototyping and Product Development The future of generative AI in rapid prototyping and product development is bright, with numerous emerging trends and potential developments on the horizon. As AI technology continues to advance, it will play an increasingly central role in the design and development of new products, driving innovation and efficiency across various industries. ## Why Choose Rapid Innovation for Implementation and Development Choosing rapid innovation for implementation and development is crucial in today's fast-paced business environment. Rapid innovation allows businesses to stay agile, capitalize on emerging opportunities, continuously improve, and foster a culture of creativity. It is essential for maintaining a competitive edge and driving long-term success. ## Conclusion The journey through the realms of generative AI and sustainable innovation has been enlightening. By embracing this technology and fostering a culture of collaboration and ethical responsibility, we can pave the way for a more sustainable, equitable, and prosperous world. Generative AI can be a powerful catalyst for positive change. 📣📣Drive innovation with intelligent AI and secure blockchain technology! Check out how we can help your business grow! 
[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa) [AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa) ## URLs * <https://www.rapidinnovation.io/post/leveraging-generative-ai-for-sustainable-innovation-in-rapid-prototyping-and-product-development> ## Hashtags #GenerativeAI #SustainableInnovation #RapidPrototyping #AIinIndustry #FutureOfAI
rapidinnovation
1,908,366
Configuring Spring with JPA and Microsoft SQL Server
Setting up the database in a Java development environment can be a challenging task,...
0
2024-07-02T02:46:24
https://dev.to/kbdemiranda/configurando-o-spring-com-jpa-e-microsoft-sql-server-4igm
sqlserver, java, jpa, sql
--- title: Setting up Spring with JPA and Microsoft SQL Server published: true description: tags: sqlserver, java, jpa, sql cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nxkdd0e1lo4oeizo1rzd.png # Use a ratio of 100:42 for best results. # published_at: 2024-06-23 14:11 +0000 --- Configuring the database in a Java development environment can be a challenging task, especially when it comes to choosing the correct driver and properly setting up the dependencies. Here, I will share how to set up a Spring MVC environment using JPA and SQL Server. ## Step 1: Adding Dependencies The first step is to add the required dependencies to your `pom.xml` file. ```xml <dependencies> <!-- MSSQL dependency --> <dependency> <groupId>com.microsoft.sqlserver</groupId> <artifactId>mssql-jdbc</artifactId> <version>7.2.2.jre8</version> </dependency> <!-- Spring Data JPA dependency --> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <!-- Spring Boot Starter Web dependency --> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> </dependencies> ``` ## Step 2: Configuring JPA Now let's create the JPA configuration class. I will use the name `JPAConfiguration.java`.
```java package br.com.meuprojeto.config; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Profile; import org.springframework.jdbc.datasource.DriverManagerDataSource; import org.springframework.orm.jpa.JpaTransactionManager; import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean; import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter; import org.springframework.transaction.annotation.EnableTransactionManagement; import javax.persistence.EntityManagerFactory; import javax.sql.DataSource; import java.util.Properties; @Configuration @EnableTransactionManagement public class JPAConfiguration { @Bean public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource, Properties additionalProperties) { LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean(); HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter(); factoryBean.setJpaVendorAdapter(vendorAdapter); factoryBean.setPackagesToScan("br.com.meuprojeto.loja.models"); factoryBean.setDataSource(dataSource); factoryBean.setJpaProperties(additionalProperties); return factoryBean; } @Bean @Profile("dev") public Properties additionalProperties() { Properties properties = new Properties(); properties.setProperty("hibernate.dialect", "org.hibernate.dialect.SQLServerDialect"); properties.setProperty("hibernate.show_sql", "true"); properties.setProperty("hibernate.hbm2ddl.auto", "create"); properties.setProperty("javax.persistence.schema-generation.scripts.create-target", "db-schema.jpa.ddl"); return properties; } @Bean @Profile("dev") public DriverManagerDataSource dataSource() { DriverManagerDataSource dataSource = new DriverManagerDataSource(); dataSource.setUsername("sa"); dataSource.setPassword(""); // Add your password here dataSource.setUrl("jdbc:sqlserver://127.0.0.1;databaseName=MeuProjeto;");
dataSource.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver"); return dataSource; } @Bean public JpaTransactionManager transactionManager(EntityManagerFactory emf) { return new JpaTransactionManager(emf); } } ``` ### Configuration Highlights 1. **EntityManagerFactory Bean**: Configures the `EntityManagerFactory` with the Hibernate adapter and defines the package where the JPA entities are located. 2. **Additional Properties**: Hibernate-specific settings, such as the SQL dialect, printing SQL to the console, and database schema generation. 3. **DataSource Bean**: Configures the database connection, including the URL, user, password, and driver. 4. **TransactionManager Bean**: Manages JPA transactions. ## Final Considerations When configuring the database for a development environment, it is essential to make sure that the driver and SQL Server versions are compatible. In the example above, driver version `7.2.2.jre8` was used successfully with the latest versions of SQL Server Developer and Express. This setup should provide a solid foundation for starting to develop Spring MVC applications with JPA using SQL Server. Try it out and adapt it as needed to meet your specific requirements.
kbdemiranda
1,908,377
ApplyPass - Goodbye job applications, hello interviews
ApplyPass is an AI-powered job search tool that helps engineers land more interviews in less time....
0
2024-07-02T03:26:17
https://dev.to/applypass/applypass-goodbye-job-applications-hello-interviews-70n
software, jobs, interview
ApplyPass is an AI-powered job search tool that helps engineers land more interviews in less time. The ApplyPass team reviews and optimizes your resume, then our auto-applier sends hundreds of applications for jobs that match your specific skills and experience. ApplyPass offers a freemium service, allowing you to submit the first 100 applications for free. With our premium plan, you can send up to 400 applications per week. [https://www.applypass.com](https://www.applypass.com)
applypass
1,908,375
Agglomerative Clustering Metrics: Hierarchical Clustering Techniques
Agglomerative clustering is a hierarchical clustering method used to group similar objects together. It starts with each object as its own cluster, and then iteratively merges the most similar clusters together until a stopping criterion is met. In this lab, we will demonstrate the effect of different metrics on the hierarchical clustering using agglomerative clustering algorithm.
27,933
2024-07-02T03:22:45
https://labex.io/tutorials/ml-agglomerative-clustering-metrics-49061
coding, programming, tutorial, sklearn
## Introduction Agglomerative clustering is a hierarchical clustering method used to group similar objects together. It starts with each object as its own cluster, and then iteratively merges the most similar clusters together until a stopping criterion is met. In this lab, we will demonstrate the effect of different metrics on the hierarchical clustering using agglomerative clustering algorithm. ### VM Tips After the VM startup is done, click the top left corner to switch to the **Notebook** tab to access Jupyter Notebook for practice. Sometimes, you may need to wait a few seconds for Jupyter Notebook to finish loading. The validation of operations cannot be automated because of limitations in Jupyter Notebook. If you face issues during learning, feel free to ask Labby. Provide feedback after the session, and we will promptly resolve the problem for you. ## Import libraries and generate waveform data First, we import the necessary libraries and generate waveform data that will be used in this lab. ```python import matplotlib.pyplot as plt import matplotlib.patheffects as PathEffects import numpy as np from sklearn.cluster import AgglomerativeClustering from sklearn.metrics import pairwise_distances np.random.seed(0) # Generate waveform data n_features = 2000 t = np.pi * np.linspace(0, 1, n_features) def sqr(x): return np.sign(np.cos(x)) X = list() y = list() for i, (phi, a) in enumerate([(0.5, 0.15), (0.5, 0.6), (0.3, 0.2)]): for _ in range(30): phase_noise = 0.01 * np.random.normal() amplitude_noise = 0.04 * np.random.normal() additional_noise = 1 - 2 * np.random.rand(n_features) # Make the noise sparse additional_noise[np.abs(additional_noise) < 0.997] = 0 X.append( 12 * ( (a + amplitude_noise) * (sqr(6 * (t + phi + phase_noise))) + additional_noise ) ) y.append(i) X = np.array(X) y = np.array(y) ``` ## Plot the ground-truth labeling We plot the ground-truth labeling of the waveform data. 
```python n_clusters = 3 labels = ("Waveform 1", "Waveform 2", "Waveform 3") colors = ["#f7bd01", "#377eb8", "#f781bf"] # Plot the ground-truth labelling plt.figure() plt.axes([0, 0, 1, 1]) for l, color, n in zip(range(n_clusters), colors, labels): lines = plt.plot(X[y == l].T, c=color, alpha=0.5) lines[0].set_label(n) plt.legend(loc="best") plt.axis("tight") plt.axis("off") plt.suptitle("Ground truth", size=20, y=1) ``` ## Plot the distances We plot the interclass distances for different metrics. ```python for index, metric in enumerate(["cosine", "euclidean", "cityblock"]): avg_dist = np.zeros((n_clusters, n_clusters)) plt.figure(figsize=(5, 4.5)) for i in range(n_clusters): for j in range(n_clusters): avg_dist[i, j] = pairwise_distances( X[y == i], X[y == j], metric=metric ).mean() avg_dist /= avg_dist.max() for i in range(n_clusters): for j in range(n_clusters): t = plt.text( i, j, "%5.3f" % avg_dist[i, j], verticalalignment="center", horizontalalignment="center", ) t.set_path_effects( [PathEffects.withStroke(linewidth=5, foreground="w", alpha=0.5)] ) plt.imshow(avg_dist, interpolation="nearest", cmap="cividis", vmin=0) plt.xticks(range(n_clusters), labels, rotation=45) plt.yticks(range(n_clusters), labels) plt.colorbar() plt.suptitle("Interclass %s distances" % metric, size=18, y=1) plt.tight_layout() ``` ## Plot clustering results We plot the clustering results for different metrics. 
```python for index, metric in enumerate(["cosine", "euclidean", "cityblock"]): model = AgglomerativeClustering( n_clusters=n_clusters, linkage="average", metric=metric ) model.fit(X) plt.figure() plt.axes([0, 0, 1, 1]) for l, color in zip(np.arange(model.n_clusters), colors): plt.plot(X[model.labels_ == l].T, c=color, alpha=0.5) plt.axis("tight") plt.axis("off") plt.suptitle("AgglomerativeClustering(metric=%s)" % metric, size=20, y=1) ``` ## Summary In this lab, we demonstrated the effect of different metrics on the hierarchical clustering using agglomerative clustering algorithm. We generated waveform data and plotted the ground-truth labeling, interclass distances, and clustering results for different metrics. We observed that the clustering results varied with the choice of metric and that the cityblock distance performed the best in separating the waveforms. --- ## Want to learn more? - 🚀 Practice [Agglomerative Clustering Metrics](https://labex.io/tutorials/ml-agglomerative-clustering-metrics-49061) - 🌳 Learn the latest [Sklearn Skill Trees](https://labex.io/skilltrees/sklearn) - 📖 Read More [Sklearn Tutorials](https://labex.io/tutorials/category/sklearn) Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄
labby
1,908,369
How to Deploy a React App to GitHub Pages
How to Deploy a React App to GitHub Pages Deploying a React app to GitHub Pages is a great...
0
2024-07-02T03:07:38
https://dev.to/sh20raj/how-to-deploy-a-react-app-to-github-pages-29li
javascript, webdev, react, githubpages
# How to Deploy a React App to GitHub Pages Deploying a React app to GitHub Pages is a great way to share your project with the world. GitHub Pages is a free hosting service that makes it easy to publish static websites directly from your GitHub repository. This article will guide you through the steps to deploy your React app to GitHub Pages. {% youtube https://youtu.be/0d6tf4te4lw?si=kBTsma5TjiCqbE6o %} ## Prerequisites Before we begin, make sure you have the following: 1. **Node.js and npm**: Install Node.js and npm from [nodejs.org](https://nodejs.org/). 2. **Git**: Install Git from [git-scm.com](https://git-scm.com/). 3. **GitHub Account**: Create a GitHub account if you don't have one already. ## Step 1: Create a React App If you haven't already created a React app, you can do so using Create React App. Open your terminal and run the following commands: ```bash npx create-react-app my-react-app cd my-react-app ``` Replace `my-react-app` with the name of your project. ## Step 2: Install `gh-pages` Package To deploy your React app to GitHub Pages, you'll need to use the `gh-pages` package. Install it by running: ```bash npm install gh-pages --save-dev ``` ## Step 3: Update `package.json` Add the following properties to your `package.json` file: 1. **Homepage**: Add a `homepage` field to specify the URL where your app will be hosted. The URL should be in the format `https://<username>.github.io/<repository-name>/`. ```json "homepage": "https://<username>.github.io/<repository-name>/" ``` Replace `<username>` with your GitHub username and `<repository-name>` with the name of your repository. 2. **Predeploy and Deploy Scripts**: Add `predeploy` and `deploy` scripts to automate the deployment process. 
```json "scripts": { "predeploy": "npm run build", "deploy": "gh-pages -d build" } ``` Your `package.json` should now look something like this: ```json { "name": "my-react-app", "version": "0.1.0", "private": true, "homepage": "https://<username>.github.io/<repository-name>/", "dependencies": { "react": "^17.0.2", "react-dom": "^17.0.2", "react-scripts": "4.0.3" }, "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject", "predeploy": "npm run build", "deploy": "gh-pages -d build" }, "devDependencies": { "gh-pages": "^3.2.3" } } ``` ## Step 4: Initialize a Git Repository If your project is not already a Git repository, initialize it by running: ```bash git init git remote add origin https://github.com/<username>/<repository-name>.git ``` Replace `<username>` and `<repository-name>` with your GitHub username and repository name. ## Step 5: Commit Your Changes Add and commit your changes: ```bash git add . git commit -m "Initial commit" ``` ## Step 6: Push to GitHub Push your project to GitHub: ```bash git push -u origin master ``` ## Step 7: Deploy to GitHub Pages Deploy your app by running: ```bash npm run deploy ``` This command will build your React app and push the `build` directory to the `gh-pages` branch of your repository. GitHub Pages will then serve the files from this branch. ## Step 8: Access Your Deployed App After the deployment is complete, you can access your app at `https://<username>.github.io/<repository-name>/`. It might take a few minutes for the changes to be reflected. ## Conclusion Deploying a React app to GitHub Pages is a straightforward process that allows you to share your projects with ease. By following these steps, you can quickly get your app online and accessible to others. Happy coding!
sh20raj
1,908,368
Crafting a Node.js based framework SDK for Logto in minutes
Learn how to create a custom SDK for Logto using @logto/node. Previously in this article, we...
0
2024-07-02T03:06:01
https://blog.logto.io/crafting-nodejs-sdk/
webdev, node, opensource, identity
Learn how to create a custom SDK for Logto using `@logto/node`. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rctww1y15mwu75ritnu8.png) Previously in this [article](https://dev.to/logto/crafting-a-web-sdk-for-logto-in-minutes-i4p), we crafted a web SDK for Logto in minutes. Now, let's focus on Node.js, another popular platform for JavaScript developers. In this guide, we will walk you through the steps to create a simple Express SDK for Logto using `@logto/node`. This SDK will implement the sign-in flow, and you can follow the same steps to create an SDK for any other Node.js-based platform such as Koa, Next.js, NestJS, etc. # The sign-in flow Before we start, let's review the sign-in flow in Logto. The sign-in flow comprises the following steps: 1. **Redirect to Logto**: The user is redirected to the Logto sign-in page. 2. **Authenticate**: The user inputs their credentials and authenticates with Logto. 3. **Redirect back to your app**: After successful authentication, the user is redirected back to your app with an auth code. 4. **Code exchange**: Your app exchanges the auth code for tokens and stores the tokens as the authentication state. # Brief introduction of `@logto/node` Similar to `@logto/browser`, the `@logto/node` package exposes a `LogtoClient` class that provides the core functionalities of Logto, including methods for the sign-in flow: 1. `signIn()`: Generates the OIDC auth URL and redirects to it. 2. `handleSignInCallback()`: Checks and parses the callback URL, extracts the auth code, then exchanges the code for tokens by calling the token endpoint. 3. `getContext()`: Gets the context of the current request based on the session cookie, including the authentication state and user information. # Crafting the Express SDK In the SDK, we will provide two route handlers (`/sign-in` and `/sign-in-callback`) along with a `withLogto` middleware: 1.
`/sign-in`: A route handler that triggers the sign-in flow with a response that redirects to the OIDC auth URL. 2. `/sign-in-callback`: A route handler that processes the callback URL, exchanges the auth code for tokens, stores them, and completes the sign-in flow. 3. `withLogto` **middleware**: A middleware that calls `getContext()` to retrieve the context of the current request, including the authentication state and user information. To use the SDK, you can simply add the middleware to your Express app to protect routes, and use the route handlers to trigger the sign-in flow and handle the callback. ## Step 1: Install the package First, install the `@logto/node` package using npm or another package manager: ``` npm install @logto/node ``` ## Step 2: Prepare the storage adapter A storage adapter is required to initialize the `LogtoClient` instance. Assuming that the SDK user has already set up the Express session, we can simply implement the `Storage` interface by creating a new file `storage.ts`: ``` import type { IncomingMessage } from 'node:http'; import type { Storage, StorageKey } from '@logto/node'; export default class ExpressStorage implements Storage<StorageKey> { constructor(private readonly request: IncomingMessage) {} async setItem(key: StorageKey, value: string) { this.request.session[key] = value; } async getItem(key: StorageKey) { const value = this.request.session[key]; if (value === undefined) { return null; } return String(value); } async removeItem(key: StorageKey) { this.request.session[key] = undefined; } } ``` ## Step 3: Implement the route handlers HTTP requests are stateless, so we need to initialize the client instance for each request.
Let's prepare a function helper to create the client instance: ``` import type { IncomingMessage } from 'node:http'; import NodeClient from '@logto/node'; import type { LogtoConfig } from '@logto/node'; import type { Request, Response } from 'express'; import ExpressStorage from './storage.js'; const createNodeClient = (request: IncomingMessage, response: Response, config: LogtoConfig) => { // We assume that `session` is configured in the express app if (!request.session) { throw new Error('Express session is not configured'); } const storage = new ExpressStorage(request); return new NodeClient(config, { storage, navigate: (url) => { response.redirect(url); }, }); }; ``` In this function, we implement the `navigate` adapter along with the `ExpressStorage` adapter. The `navigate` adapter is used to redirect the user to the sign in URL. Next, let's implement the route handlers, wrapped in a function `handleAuthRoutes`: ``` import { Router } from 'express'; const baseUrl = 'http://localhost:3000'; export const handleAuthRoutes = (config: LogtoConfig): Router => { const router = Router(); router.use(`/auth/:action`, async (request, response) => { const { action } = request.params; const nodeClient = createNodeClient(request, response, config); switch (action) { case 'sign-in': { await nodeClient.signIn(`${baseUrl}/auth/sign-in-callback`); break; } case 'sign-in-callback': { if (request.url) { await nodeClient.handleSignInCallback(`${baseUrl}${request.originalUrl}`); response.redirect(baseUrl); } break; } default: { response.status(404).end(); } } }); return router; }; ``` 1. The `/auth/sign-in` route handler triggers the `sign-in` flow by calling signIn(), a sign-in state is stored in the session, and will be consumed by the `/auth/sign-in-callback` route handler. 2. 
The `/auth/sign-in-callback` route handler handles the callback URL and exchanges the auth code for tokens by calling `handleSignInCallback()`, the tokens are stored in the session by the `ExpressStorage` adapter. After the exchange is done, the user is redirected back to the home page. ## Step 4: Implement the middleware The `withLogto` middleware is used to protect routes. It calls `getContext()` to get the context of the current request, including the authentication state and user information. ``` import type { NextFunction } from 'express'; type Middleware = (request: Request, response: Response, next: NextFunction) => Promise<void>; export const withLogto = (config: LogtoConfig): Middleware => async (request: IncomingMessage, response: Response, next: NextFunction) => { const client = createNodeClient(request, response, config); const user = await client.getContext({ getAccessToken: config.getAccessToken, resource: config.resource, fetchUserInfo: config.fetchUserInfo, getOrganizationToken: config.getOrganizationToken, }); Object.defineProperty(request, 'user', { enumerable: true, get: () => user, }); next(); }; ``` The function `getContext` uses the storage adapter to get the tokens from the session. # Checkpoint: using the SDK Now that you have crafted the Express SDK for Logto, you can use it in your app by adding the middleware to protect routes and using the route handlers to trigger the sign-in flow and handle the callback. 
Here is a simple example of how to use the SDK in your Express app: ``` import http from 'node:http'; import { handleAuthRoutes, withLogto } from 'path-to-express-sdk'; import cookieParser from 'cookie-parser'; import type { Request, Response, NextFunction } from 'express'; import express from 'express'; import session from 'express-session'; const config = { appId: '<app-id>', appSecret: '<app-secret>', endpoint: '<logto-endpoint>', }; const app = express(); app.use(cookieParser()); app.use( session({ secret: 'keyboard cat', cookie: { maxAge: 14 * 24 * 60 * 60 * 1000 }, resave: false, saveUninitialized: false, }) ); app.use(handleAuthRoutes(config)); app.get('/', withLogto(config), (request, response) => { if (request.user.isAuthenticated) { return response.end(`Hello, ${request.user?.claims?.name}`); } response.redirect('/auth/sign-in'); }); const server = http.createServer(app); server.listen(3000, () => { console.log('Sample app listening on http://localhost:3000'); }); ``` In this example, we use the `withLogto` middleware to check the authentication state and redirect the user to the sign-in page if they are not authenticated; otherwise, we display a welcome message. You can check the official Express sample project [here](https://github.com/logto-io/js/tree/master/packages/express-sample). # Conclusion In this guide, we have walked you through the steps to create an Express SDK for Logto implementing the basic auth flow. The SDK provided here is a basic example. You can extend it by adding more methods and functionalities to meet your app's needs. You can follow the same steps to create an SDK for any other JavaScript-based platform that runs in Node.js. Resources: 1. [Logto Node SDK](https://github.com/logto-io/js/tree/master/packages/node) 2. [Logto Express SDK](https://github.com/logto-io/js/tree/master/packages/express) {% cta https://logto.io/?ref=dev %} Try Logto Cloud for free {% endcta %}
palomino
1,908,367
[LeetCode] Visualization of Reverse Linked List
This is my memo when I solved 206. Reverse Linked List. What we want This is pretty...
0
2024-07-02T02:52:19
https://dev.to/lada496/leetcode-visualization-of-reverse-linked-list-c35
algorithms, leetcode, javascript
This is my memo from when I solved [206. Reverse Linked List](https://leetcode.com/problems/reverse-linked-list/). ## What we want This is pretty simple. We want to re-order a provided linked list in the opposite direction. ![](https://cdn-images-1.medium.com/max/2412/1*bVSRxRLU_RahUeT3j0j2pw.png) ## Strategy Go through each element and replace the value of `next` from the next element's address with the previous element's. This requires us to handle two operations: 1) changing the value of `next` of each element and 2) tracking which element we're updating and what element comes next. ![what we want to do in each round](https://cdn-images-1.medium.com/max/2492/1*YqkotOrISxFf9lrgLwU4_Q.png) ## Code I think there are two tricky things when it comes to coding: 1) storing the next element to modify before updating `curr.next`, and 2) returning `prev` instead of `curr`, as we update `curr` at the end of the while loop. ![the last round of the while loop](https://cdn-images-1.medium.com/max/2832/1*THzyPjtfte1ME1-aVu6ljw.png) ```typescript /** * Definition for singly-linked list. * class ListNode { * val: number * next: ListNode | null * constructor(val?: number, next?: ListNode | null) { * this.val = (val===undefined ? 0 : val) * this.next = (next===undefined ? null : next) * } * } */ function reverseList(head: ListNode | null): ListNode | null { let [prev, curr] = [null, head] while(curr){ const next = curr.next curr.next = prev prev = curr curr = next } return prev }; ```
lada496
1,908,365
Hi, this is Chuck!
I'm Chuck, introducing a great CDN Product to you, Tencent EdgeOne. If you are interested, please...
0
2024-07-02T02:45:08
https://dev.to/chuckcchen/hi-this-is-chuck-15g4
I'm Chuck, introducing a great CDN Product to you, [Tencent EdgeOne](https://edgeone.ai/). If you are interested, please feel free to [contact us](https://edgeone.ai/contact).
chuckcchen
1,908,363
How to create a graph diagramming by uploading pictures
Step 1: Open the canvas Step 2: click the import images button to pop up the image selection...
0
2024-07-02T02:40:15
https://dev.to/fridaymeng/how-to-create-a-graph-diagramming-by-uploading-pictures-280p
Step 1: Open the canvas. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9eq339delmtfthp2v9n.png) Step 2: Click the import images button to open the image selection dialog box. Select the image you want to import and confirm. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6bqqw18x47l5e61z45fq.png) Step 3: After clicking Confirm, you will see the selected image in the canvas. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l417b0olhuoup9om543i.png) Step 4: Select a sorting method from the buttons at the top. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qymkf5na87ry2gr2edz9.png) Step 5: Drag the small dot on any node to add a connection to another node. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h0wd0hpd8r0ra9pni5r3.png) Step 6: Drag the small dot on any node and release the mouse over blank space to add a new node. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g9qm5ewg6xmdlw7q1dj9.png) Step 7: Click the top and config buttons to modify the properties of the graph's nodes and lines, such as adding line arrows, changing node colors, adding animations, etc. For more operations, please visit AddGraph.
fridaymeng
1,908,362
Page Object Model and Page Factory in Selenium
We implement the test automation using Selenium to ease the process of website testing. But what if...
0
2024-07-02T02:38:20
https://dev.to/elle_richard_232/page-object-model-and-page-factory-in-selenium-56d5
software, page, factory, selenium
We implement test automation using [Selenium](https://testgrid.io/blog/selenium-what-is-it/) to ease the process of website testing. But what if test automation scripts are not written in a structured way? It would make the testing process inefficient and ambiguous. To maintain the efficient performance and project structure of the automation scripts, it is necessary to have different pages/scripts for each task. To make it easier to distribute code across different files and maintain a clean project structure, the Page Object Model and Page Factory come to the rescue. In this article, we will walk you through some of the core concepts of the Page Object Model and Page Factory in Selenium with the help of appropriate examples. ### What is the Page Object Model in Selenium? Page Object Model (POM) is a design pattern in Selenium that is used to create an object repository to store all web elements. It helps improve code reusability and test case maintenance. In this design pattern, each web page in the web application has a corresponding Page Class in the automation script. This Page class identifies the various WebElements of that web page and also includes methods to perform testing on those WebElements. Note: Methods inside a Page class should be named to represent the task they perform. For example, if a loader is waiting for the order to be confirmed, the POM method name can be waitForOrderConfirmation(). ### Why Do We Need the Page Object Model? The below-mentioned points depict the need for a Page Object Model in Selenium. **Code Duplication:** Without proper management of locators, automation test projects can become unwieldy due to duplicated code or excessive repetition of locator usage. **Time Efficiency:** Maintaining scripts becomes cumbersome when multiple scripts rely on the same page element.
Any change in that element necessitates updates across all affected scripts, consuming valuable time and risking errors. **Code Maintenance:** POM mitigates code maintenance challenges by centralizing element identification and interaction in separate class files. When a web element changes, updating its locator in one class file propagates the change across all associated scripts, ensuring code remains reusable, readable, and maintainable. **Adaptability to UI Changes:** When UI elements are restructured or relocated, existing automation tests may fail due to outdated locators. Manual updates to numerous scripts can be time-intensive. However, with POM, locating and updating element locators is centralized, streamlining the process and allowing testers to focus on enhancing test coverage rather than manual adjustments. By implementing POM, automation frameworks gain flexibility, efficiency, and resilience to UI modifications, ultimately enhancing the effectiveness of test automation efforts. ### Advantages of Page Object Model in Selenium The advantages of using the Page object model in Selenium are: **Reusability:** The same Page object class can be used in several test cases, which reduces code duplication and improves code reuse. This saves time and effort when writing new tests because the same Page object class may be used several times. **Easy Maintenance:** The page object model improves the UI test management by organizing code logically. It helps identify which page or screen needs modification when UI elements or actions change. **Readability of Scripts:** POM makes the test scripts more readable and understandable by separating the script files for each screen and using the logical names for methods. **Increases test coverage:** POM allows testers to create additional tests with less effort. This increases test coverage and helps to uncover more faults, resulting in software with greater quality. 
**Functional encapsulation**: Using POM, all probable testing operations on a page may be described and included within the same class built for each page. This enables a clear definition and defines the scope of each page’s operation. ### What is Page Factory in Selenium? Page Factory is an in-built Selenium design pattern for web automation testing that simplifies the creation of Page Objects. It reduces the amount of boilerplate code required to build Page Objects, making the automation code easier to maintain and read. In Page Factory, testers utilize the @FindBy annotation alongside the initElements method to initialize web elements. The @FindBy annotation accepts various attributes such as tagName, partialLinkText, name, linkText, id, CSS, className, and XPath, enabling testers to locate and interact with elements on the web page precisely. ### Advantages of Page Factory in Selenium Using the Page Factory along with the Page Object Model in Selenium brings a lot of advantages. Listed below are a few advantages of Page Factory: **Easy Initialization:** PageFactory simplifies web element initialization by allowing the use of annotations like @FindBy directly within the page object class. These annotations specify locators (e.g., id, name, XPath), and PageFactory automatically initializes elements upon instantiation of the page object. **Lazy Initialization:** PageFactory employs lazy initialization, meaning elements are initialized only when accessed or interacted with in the test code. This optimizes performance by avoiding unnecessary element lookup and initialization when elements are not needed. **Improved Code Readability:** Separating web element initialization from test code enhances code readability. With Page Factory, it’s clearer to understand code intent and interactions with the web page. **Enhanced Test Performance:** Page Factory can boost test performance by reducing the overhead of locating web elements. 
Initializing the Page Object once per test, rather than per test method, minimizes redundant operations, improving overall efficiency. ### FAQs 1. What is the difference between the Page object model and the Page factory in Selenium? The Page Object Model is a code design pattern that creates an object repository for the web elements of a page. In a plain POM class, elements are described with By locators and each element must be located explicitly. Page Factory, on the other hand, is a Selenium class that implements POM for you: elements are declared with @FindBy annotations, initialized through initElements, and can be cached with @CacheLookup. 2. Why do we use POM in Selenium? POM is a Selenium design pattern that builds an object repository to hold all web elements. It reduces code duplication and enhances test case management. 3. Can we use POM without Page Factory in Selenium? Yes, you can use the Page Object Model (POM) in Selenium without using Page Factory. **Source:** _This blog was originally posted on [TestGrid](https://testgrid.io/blog/page-object-model-and-page-factory-in-selenium/)._
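The centralization described above is easy to see in miniature. Below is a toy, framework-free sketch of a page object in JavaScript; the `StubDriver`, `LoginPage`, `logInAs`, and the locators are all invented for illustration (a real Selenium page object would use `driver.findElement(By.id(...))` or, with Page Factory, `@FindBy` fields). The point is structural: every locator lives in one class, so a UI change means one edit.

```javascript
// Hypothetical stand-in for Selenium's WebDriver: records what was typed
// and clicked so the page object's behavior can be observed.
class StubDriver {
  constructor(elements) {
    this.elements = elements; // locators this fake page "contains"
    this.typed = {};          // records sendKeys calls per locator
    this.clicked = null;      // records the last clicked locator
  }
  findElement(locator) {
    if (!this.elements.includes(locator)) throw new Error(`No element matches ${locator}`);
    return {
      sendKeys: (text) => { this.typed[locator] = text; },
      click: () => { this.clicked = locator; },
    };
  }
}

class LoginPage {
  // Every locator for this screen lives here, and only here.
  static USERNAME = '#username';
  static PASSWORD = '#password';
  static SUBMIT = '#login-btn';

  constructor(driver) { this.driver = driver; }

  // A readable action any test case can reuse.
  logInAs(user, password) {
    this.driver.findElement(LoginPage.USERNAME).sendKeys(user);
    this.driver.findElement(LoginPage.PASSWORD).sendKeys(password);
    this.driver.findElement(LoginPage.SUBMIT).click();
  }
}

const driver = new StubDriver(['#username', '#password', '#login-btn']);
new LoginPage(driver).logInAs('alice', 'secret');
console.log(driver.clicked); // '#login-btn'
```

If the submit button's id changes, only `LoginPage.SUBMIT` is edited; every test that calls `logInAs` keeps working untouched.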
elle_richard_232
1,908,361
Stay Updated with Python/FastAPI/Django: Weekly News Summary (24/06/2024 - 30/06/2024)
Dive into the latest tech buzz with this weekly news summary, focusing on Python, FastAPI, and Django...
0
2024-07-02T02:30:08
https://poovarasu.dev/python-fastapi-django-weekly-news-summary-24-06-2024-to-30-06-2024/
python, django, fastapi, flask
Dive into the latest tech buzz with this weekly news summary, focusing on Python, FastAPI, and Django updates from June 24th to June 30th, 2024. Stay ahead in the tech game with insights curated just for you! This summary offers a concise overview of recent advancements in the Python/FastAPI/Django framework, providing valuable insights for developers and enthusiasts alike. Explore the full post for more in-depth coverage and stay updated on the latest in Python/FastAPI/Django development. Check out the complete article here https://poovarasu.dev/python-fastapi-django-weekly-news-summary-24-06-2024-to-30-06-2024/
poovarasu
1,908,360
Stay Updated with PHP/Laravel: Weekly News Summary (24/06/2024 - 30/06/2024)
Dive into the latest tech buzz with this weekly news summary, focusing on PHP and Laravel updates...
0
2024-07-02T02:28:14
https://poovarasu.dev/php-laravel-weekly-news-summary-24-06-2024-to-30-06-2024/
php, laravel
Dive into the latest tech buzz with this weekly news summary, focusing on PHP and Laravel updates from June 24th to June 30th, 2024. Stay ahead in the tech game with insights curated just for you! This summary offers a concise overview of recent advancements in the PHP/Laravel framework, providing valuable insights for developers and enthusiasts alike. Explore the full post for more in-depth coverage and stay updated on the latest PHP/Laravel development. Check out the complete article here https://poovarasu.dev/php-laravel-weekly-news-summary-24-06-2024-to-30-06-2024/
poovarasu
1,908,359
Trying out a new stack: my experience working with tRPC and Drizzle on my Next.JS project
Recently I started working on a side project and I wanted to make it a fullstack typescript...
0
2024-07-02T02:26:07
https://dev.to/flavioribeirojr/trying-out-a-new-stack-my-experience-working-with-trpc-and-drizzle-on-my-nextjs-project-3mho
nextjs, typescript, postgres, trpc
Recently I started working on a side project and I wanted to make it a fullstack Typescript application. I decided to use Next.JS to take advantage of the tools already included in the framework (e.g. router, RSC, caching, etc.). On top of that I added two amazing libraries that make excellent use of **Typescript**: `tRPC` and `drizzle`. I want to talk a little bit about my experience working with them. ### [Drizzle](https://orm.drizzle.team/): a typescript ORM There are a lot of _ORMs_ and query builders available for Node.JS; you might have used or heard of packages such as Prisma, Knex, Sequelize or Typeorm. All of those will help you connect to a database, create your tables and run your queries, and some of them will even let you define the entity types for your schema. What drizzle does differently is the way it takes full advantage of the type system Typescript offers. The first thing I noticed when I started working with drizzle is how easy it was to define the schema for my tables. I didn't have to set up annotations or manually define the type for each column. Another thing that I found super helpful is the ability to reference the columns using a `camelCase` style while still defining the column name as `snake_case`, making it easier to write manual queries. See the example below: ![Code example of drizzle schema](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mont3ywa38e1isi42fi1.png) I am defining a users table with 3 columns of different types; notice how the object keys are defined using the `camelCase` style, while in the first parameter of the drizzle constructors (`varchar`, `date`) I'm defining the name that will be used internally. With drizzle, defining the schema like this automatically gives you types; for example, when you define a `varchar` column the inferred type will be `string`. 
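To make that naming split concrete without pulling in the library, here is a tiny hand-rolled sketch of the idea. Note this is NOT drizzle's actual API; the `column` and `selectSql` helpers are invented for illustration only:

```javascript
// Each column records its snake_case database name, while application
// code addresses it through a camelCase object key.
function column(dbName, type) {
  return { dbName, type };
}

const users = {
  fullName: column('full_name', 'varchar'),
  createdAt: column('created_at', 'date'),
};

// Generated SQL uses the dbName; your code only ever sees users.createdAt.
function selectSql(table, tableName) {
  const cols = Object.values(table).map((c) => c.dbName).join(', ');
  return `select ${cols} from ${tableName}`;
}

console.log(selectSql(users, 'users')); // select full_name, created_at from users
```

Drizzle does this mapping (and the type inference on top of it) for you from a single schema definition.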
See below: ![Code example of the inferred types on drizzle](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6hkvfb3pfanm2s7eisif.png) As you can see, all of my types are already defined by drizzle. Drizzle has a big set of functions that facilitate building typesafe queries, offering a full solution for unions, joins and other common SQL operations. It is also worth mentioning the magic `sql` operator that lets you write your own query in case you need to. If you want to learn more take a look at their [docs](https://orm.drizzle.team/docs/overview) ### [tRPC](https://trpc.io/): you might not need swagger 🤷🏽‍♂️ One of the difficult parts of frontend work is when you need to define what comes back from the server. One of the solutions available is to use swagger to document your API and choose between a lot of libraries that can turn the swagger document into a client that handles the calls to your server. A problem that might arise from doing this is how accurate your documentation remains as changes are made to the implementation on your server. `tRPC` is a solution for fullstack applications that use Typescript. It makes type safety easy: the types of the inputs and outputs of your `procedures` are defined in just one place, and you don't even need to explicitly create them; they will be inferred. Because all the server types are defined in the actual code running on the server, it is impossible to create inaccurate type descriptions. Below is an example of a call from a react component to a server procedure; notice how the types are already defined: ![Code example of using tRPC mutation on a react component](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9vhsaxvv3sbb753vv3s4.png) It is worth mentioning two other libraries that work really well together with tRPC: `zod` and `react-query`. 
With `zod` you can define a schema type and validate it against the actual value at the same time, making `mutations`, for example, much easier to validate. `react-query` is the missing piece for working with promises in React, and tRPC has a really good wrapper around it so that you can easily use your queries and mutations in your components. Please note that this only works if you are working with a fullstack Typescript application; if you are working with different languages there is nothing tRPC can do for you. Keep swagging 🕺🏽 ### Conclusion Working with Typescript nowadays feels mandatory when working with javascript, as it improves the developer experience and avoids silly typing bugs, and working with tools that have Typescript at their core is the best way to make the most of it.
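The single-source-of-truth idea behind tRPC can be sketched without any library at all. Everything below (`procedures`, `client`, `greet`, the hand-rolled validation) is invented for illustration and is not tRPC's actual API; in a real setup the proxy would serialize the call over HTTP and the types would be inferred from these same definitions:

```javascript
// Procedures are defined once, together with their input validation
// (zod-style, but hand-rolled here so the sketch is self-contained).
const procedures = {
  greet(input) {
    if (typeof input.name !== 'string') throw new Error('name must be a string');
    return { message: `Hello, ${input.name}!` };
  },
};

// The "client" is just a thin proxy over the same definitions the server
// runs, so there is no separate API description that can drift out of date.
const client = new Proxy({}, {
  get: (_target, name) => (input) => procedures[name](input),
});

console.log(client.greet({ name: 'Ada' }).message); // Hello, Ada!
```

Because the client is derived from the server's own code, renaming a procedure or changing its input shape breaks callers immediately instead of silently invalidating a swagger document.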
flavioribeirojr
1,908,358
Effective Techniques to Enhance Search Rankings and Minimize Bounce Rate
Boosting your search engine ranking and lowering your bounce rate are super important for making your...
0
2024-07-02T02:24:52
https://dev.to/juddiy/effective-techniques-to-enhance-search-rankings-and-minimize-bounce-rate-22b1
Boosting your search engine ranking and lowering your bounce rate are super important for making your website better. When you nail these, you'll draw in more visitors and keep them around longer, which can lead to higher conversion rates. Here are some great tips to help you reach these goals: #### 1. Optimize Website Content - **Keyword Research and Usage**: Ensure your content includes relevant keywords to boost search engine ranking. - **High-Quality Content**: Provide valuable information that meets user needs, reducing the likelihood of users quickly leaving the site. - **Regular Updates**: Keep your content fresh and relevant to encourage repeat visits. #### 2. Improve Website Load Speed - **Compress Images and Files**: Use appropriately sized and formatted images to reduce file sizes and speed up page load times. - **Enable Browser Caching**: Reduce page reload times. - **Use a Content Delivery Network (CDN)**: Distribute content across multiple servers to speed up page loading. #### 3. Enhance User Experience (UX) - **Mobile Friendliness**: Ensure your website looks good on mobile devices, which is crucial for SEO ranking. - **Clear Navigation**: Simplify site navigation so users can easily find what they need. - **Reduce Pop-Ups**: While pop-ups can increase conversions, too many can annoy users. #### 4. Increase Website Interactivity - **Internal Linking Strategy**: Add relevant internal links within articles to encourage users to visit more pages. - **Multimedia Content**: Use videos, images, and interactive content to capture user attention. - **Comments and Feedback Mechanisms**: Encourage user engagement through comments and feedback, increasing page activity. #### 5. Technical SEO Optimization - **Optimize Meta Tags**: Use compelling title tags and meta descriptions to improve click-through rates. - **Structured Data Markup**: Use structured data to help search engines better understand your content. 
- **Analyze and Improve**: Use the [SEO AI](https://seoai.run/) tool to monitor site performance, identify high bounce rate pages, and make improvements. #### 6. Boost Website Credibility - **Security**: Ensure your site has an SSL certificate for a secure browsing experience. - **Trust Signals**: Add trust signals such as user reviews, case studies, and certification badges to enhance user trust. ### Conclusion By combining content optimization, technical improvements, and enhanced user experience, you can significantly boost your site's search engine ranking while reducing bounce rates. This not only helps to increase site traffic but also enhances user satisfaction and conversion rates. Regularly evaluating and adjusting these strategies to keep up with changing search engine algorithms and user behavior is key to success. I hope these methods help you optimize your website for better performance!
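For the structured-data point above, here is a minimal sketch of emitting schema.org JSON-LD for an article. The helper name and field values are illustrative (check Google's rich-results documentation for the exact properties your content type needs), and real output should be validated with a structured-data testing tool:

```javascript
// Build a schema.org "Article" object as a JSON-LD string, the markup
// format search engines read to understand page content.
function articleJsonLd({ headline, author, datePublished }) {
  return JSON.stringify({
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline,
    author: { '@type': 'Person', name: author },
    datePublished,
  });
}

// Embed the JSON-LD in the page head inside a script tag.
const tag = `<script type="application/ld+json">${articleJsonLd({
  headline: 'Effective Techniques to Enhance Search Rankings',
  author: 'Juddiy',
  datePublished: '2024-07-02',
})}</script>`;
console.log(tag.startsWith('<script type="application/ld+json">')); // true
```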
juddiy
1,908,317
I created a Boiler Plate for creating Web Apps with Google Apps Script.
GAS-WebApp-BoilerPlate Boilerplate for creating web apps with Google Apps Script. We...
0
2024-07-02T02:24:13
https://dev.to/mikoshiba-kyu/i-created-a-boiler-plate-for-creating-web-apps-with-google-apps-script-4gn2
vite, react, clasp, gas
## GAS-WebApp-BoilerPlate Boilerplate for creating web apps with Google Apps Script. We expect to develop in TypeScript and React. {% embed https://github.com/Mikoshiba-Kyu/gas-webapp-boilerplate %} --- ## Feature - **DevContainer** is used - **Vite** and **clasp** generate files for eventual deployment in Google Apps Script - Unit tests can be created with **Vitest** and E2E tests with **Playwright** --- ## Quick Start ``` git clone https://github.com/Mikoshiba-Kyu/gas-webapp-boilerplate.git ``` --- ## Overview of Project Structure The core development code is placed under the `src` folder for the front end and back end, respectively. ``` 📁 src ├── 📁 backend │ ├── 📁 serverFunctions │ │ ├── 📄 index.ts │ │ └── 📄 and more... │ └── 📄 main.ts └── 📁 frontend ├── 📄 main.tsx └── 📄 and more... ``` Running `yarn build` creates files for Google Apps Script in the `gas/dist` folder. ``` 📁 gas ├── 📁 dist │ ├── 📄 index.html │ └── 📄 main.js └── 📄 appsscript.json ``` Other folders are described below. - `.github` Preset Github Actions for E2E test execution. - `.husky` Preset lint at pre-commit time. - `e2e` Stores Playwright test files. --- ## Development ### Launch DevContainer Clone the repository and start DevContainer in any way you wish. ### Front-end implementation Implement the front-end side in `src/frontend`. Common UI frameworks, state management libraries, etc. can be used. ### Back-end implementation Google Apps specific classes such as `SpreadsheetApp` cannot be used directly from the front end. You must call the function exposed to global in `backend/main.ts` via [gas-client](https://github.com/enuchi/gas-client) from the front end. ```typescript // backend/main.ts import { sampleFunction } from './serverFunctions' declare const global: { [x: string]: unknown } // This function is required to run as a webApp global.doGet = (): GoogleAppsScript.HTML.HtmlOutput => { return HtmlService.createHtmlOutputFromFile('dist/index.html') } // Create the necessary functions below. 
global.sampleFunction = sampleFunction ``` ```typescript // frontend/App.tsx import { GASClient } from 'gas-client' const { serverFunctions } = new GASClient() ... ... const handleButton = async () => { if (import.meta.env.PROD) { try { const response: number = await serverFunctions.sampleFunction(count) setCount(response) } catch (err) { console.log(err) } } else { setCount(count + 1) } } ... ... ``` > 🗒️Notes > In the above example, `import.meta.env.PROD` is used to branch by environment. > If created by `yarn build` and deployed in GAS, the environment uses `serverFunction`, > And if it is running locally by `yarn dev`, it will work in an alternative way. ### Creating and running tests #### Unit testing ```bash $ yarn test:unit ``` For front-end and unit testing, use Vitest. If you want to test Google Apps specific functions created in `serverFunctions`, you need to mock them. #### E2E testing ```bash $ yarn test:e2e ``` Playwright is used for E2E testing. The URL of an already running GAS web app must be specified as `baseURL`. ```typescript // playwright.config.ts ... ... use: { /* Base URL to use in actions like `await page.goto('/')`. */ baseURL: process.env.PLAYWRIGHT_BASE_URL || 'your apps url', }, ... ... ``` > ⚠️ Important > When conducting E2E testing, the target application must be made available to `everyone`. --- ### Deployment First, compile with Vite. ```bash $ yarn build ``` If you are not logged in to clasp, log in. ```bash $ clasp login ``` Create a new project if one has not already been created. When you create a project as follows, a new file `appsscript.json` will be created in the root. If you want to use the one already placed in the `gas` folder, you can delete it. ```bash $ clasp create ? Create which script? standalone docs sheets slides forms > webapp api ``` > 🗒️Notes > If you are using a project that has already been created, > manually create `.clasp.json` in the root and specify the `scriptId` directly. 
> 🗒️Notes > Set "timeZone" in `gas/appsscript.json` according to your situation. Replace the `rootDir` in the created `.clasp.json` with the path to the `gas` folder of the project. ```json { "scriptId": "********", "rootDir": "/workspaces/gas-webapp-boilerplate/gas" } ``` Execute deployment. ```bash $ clasp push $ clasp deploy ``` To open the deployed project page in a browser, use the following command. ```bash $ clasp open ```
mikoshiba-kyu
1,908,357
Javascript/Html/Css
JavaScript, HTML and CSS are the three main technologies for creating web pages. Each of them has...
0
2024-07-02T02:23:09
https://dev.to/bekmuhammaddev/javascripthtmlcss-1fd5
javascript, html, css, english
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pdq7jvr8gzfkwt93l3bs.png) JavaScript, HTML and CSS are the three main technologies for creating web pages. Each of them has its own unique features, and when used together, they are very effective in creating and managing web pages. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6qxquvuo5vt0w0org8h4.png) _javascript,html,css structura:_ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2xzsu57h86ywjqor0jfk.png) **HTML (HyperText Markup Language)** HTML is used to define the structure of web pages. It detects the components of web pages, such as text, images, videos, links, and other elements. Basic Structure of HTML: An HTML document consists of three main parts: the doctype declaration, the head part, and the body part. The basic structure of HTML is: ``` <!DOCTYPE html> <html> <head> <title>Mening Veb Sahifam</title> </head> <body> <h1>Sarlavha</h1> <p>Bu paragraflar.</p> </body> </html> ``` An HTML element is defined using tags. The tags consist of opening and closing tags, and the content between them makes up the content of the HTML element. Basic HTML Tags - 'html': start and end of HTML document. - 'head': Contains metadata, styling and scripts. - 'title': Text to be displayed in the browser's title section. - 'body': Contains the main content of the web page. - 'h1' to 'h6': Labels for headings, 'h1' is the largest, 'h6' is the smallest heading. - 'p': Paragraph text. - 'a': Link (anchor) element. - 'img': Image element. - 'div': A section or container element. - 'span': Inline container element. HTML is used to define the basic structure of web pages. It has a simple and understandable syntax, making it a very powerful tool when used in combination with other technologies. Using HTML, CSS, and JavaScript together, you can create interactive and aesthetically beautiful web pages. 
**CSS (Cascading Style Sheets)** CSS is used to specify the appearance, that is, the style of web pages. It controls the color, font, size, position and other visual properties of HTML elements. css kod: ``` <!DOCTYPE html> <html> <head> <title>Mening Veb Sahifam</title> <style> body { font-family: Arial, sans-serif; background-color: #f0f0f0; color: #333; } h1 { color: #0066cc; } </style> </head> <body> <h1>Sarlavha</h1> <p>Bu paragraflar.</p> </body> </html> ``` css example: ``` h1 { color: blue; font-size: 24px; } ``` In the example above, the h1 selector sets the color and font-size properties for all <h1> tags. **JavaScript** JavaScript adds dynamism to web pages. It manages user interaction, changes page content, validates data, and performs many other tasks.JavaScript is mainly used to make web pages dynamic and interactive. JavaScript was originally created by Netscape and is now one of the most popular and widely used programming languages ​​in the world. javascript code: ``` <!DOCTYPE html> <html> <head> <title>Mening Veb Sahifam</title> <style> body { font-family: Arial, sans-serif; background-color: #f0f0f0; color: #333; } h1 { color: #0066cc; } </style> </head> <body> <h1 id="sarlavha">Sarlavha</h1> <p id="matn">Bu paragraflar.</p> <button onclick="matnniOzgartirish()">Matnni O'zgartirish</button> <script> function matnniOzgartirish() { document.getElementById("sarlavha").innerHTML = "Yangi Sarlavha"; document.getElementById("matn").innerHTML = "Matn o'zgartirildi!"; } </script> </body> </html> ``` Usage of JavaScript: 1. Web Development: Mainly used in front-end development, but also used in back-end development thanks to Node.js. 2. Mobile Apps: Used to create mobile apps through frameworks like React Native and Ionic. 3. Games: Frameworks like Phaser and Babylon.js are used to create games that can be played in the browser. 4. Server-side programming: Using Node.js, you can write programs that run on the server. Popular frameworks and libraries: 1. 
React: Developed by Facebook and widely used to create Single Page Applications (SPAs). 2. Angular: Developed by Google and designed for building large and complex web applications. 3. Vue.js: A framework with a very flexible and simple syntax, used in small and large projects. 4. Node.js: Used for server-side programming, and helps create highly efficient and scalable web applications. HTML defines the structure of the page, CSS decorates the page, and JavaScript adds dynamism to the page. Together, these three technologies are used to make web pages beautiful and interactive. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93d0i8zprt2wp70ta952.png) In the 2023 ranking of technologies, JavaScript is the leader, with HTML/CSS in 2nd place, followed by the other languages. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w6a9ridpmpjzc0yrh25o.png)
bekmuhammaddev
1,842,903
The Ultimate NixOS Homelab Guide - Flakes, Modules and Fail2Ban w/ Cloudflare
Welcome back everyone to the NixOS homelab guide, today we will be moving all our configurations into...
27,253
2024-07-02T02:17:39
https://dev.to/jasper-at-windswept/the-ultimate-nixos-homelab-guide-flakes-modules-and-fail2ban-w-cloudflare-25ba
nixos, linux, homeserver
Welcome back everyone to the NixOS homelab guide, today we will be moving all our configurations into a very simple flake, setting up Vaultwarden with Fail2Ban and finally modularizing our configuration ready for future self hosted apps. Sorry for the huge wait, I had some issues and personal problems that cropped up but I'm back! ## Flake Setup Moving our configuration to a flake is pretty easy so let's get straight to it. Make a new directory in your home folder. ``` mkdir ~/.flake cd ~/.flake ``` Now create a new `flake.nix` file at the root of that directory and open it in your text editor of choice. ``` vim flake.nix ``` Now fill out the file with the following, then I'll explain and break it down. ```nix { description = "A very basic flake"; inputs = { nixpkgs.url = "github:nixos/nixpkgs/nixpkgs-unstable"; }; outputs = { self, nixpkgs, ... } @ inputs: let system = "x86_64-linux"; version = "24.11"; user = "your-username"; hostname = "homelab"; pkgs = import nixpkgs { inherit system; config = { allowUnfree = true; }; }; lib = nixpkgs.lib; in { nixosConfigurations = { ${hostname} = lib.nixosSystem { inherit system; specialArgs = { inherit user hostname version; }; modules = [ ./nix/configuration.nix ]; }; }; }; } ``` We initialize a flake, add the unstable nixpkgs source, set some variables we can use through our entire config (make sure to set the hostname and user variables to whatever username and hostname you used in your `configuration.nix`, we set that file up in the last post) and then create the nixosSystem inheriting our variables and then importing `configuration.nix` which we will move into our flake directory right now. ```sh mkdir ~/.flake/nix sudo cp /etc/nixos/* ~/.flake/nix sudo chown $USER:users ~/.flake/nix/* sudo chmod a+rw ~/.flake/nix/* ``` Those commands will copy our existing configurations to our new flake directory and then make sure our user have all the write permissions. You should now have this directory structure. 
``` ~/.flake ├── flake.nix └── nix ├── configuration.nix └── hardware-configuration.nix ``` Now we need to update our `configuration.nix` to work with our flake and also add some utilities. ```nix { config, lib, pkgs, user, hostname, version, ... }: { imports = [ # Include the results of the hardware scan. ./hardware-configuration.nix ]; boot = { kernelPackages = pkgs.linuxPackages_latest; loader = { efi.canTouchEfiVariables = true; grub = { enable = true; efiSupport = true; device = "nodev"; }; }; }; nix = { # Enable flakes! settings = { experimental-features = [ "nix-command" "flakes" ]; auto-optimise-store = true; }; }; nixpkgs.config.allowUnfree = true; networking = { hostName = "${hostname}"; networkmanager = { enable = true; }; }; # Your timezone time.timeZone = "Australia/Sydney"; users.users.${user} = { isNormalUser = true; extraGroups = [ "wheel" "docker" ]; # Enable ‘sudo’ for the user with wheel. home = "/home/${user}"; shell = pkgs.zsh; }; programs = { # Personal preference zsh.enable = true; nh = { enable = true; flake = "/home/${user}/.flake"; }; }; environment.systemPackages = with pkgs; [ vim ]; # Enable the OpenSSH daemon. services.openssh.enable = true; system.stateVersion = "${version}"; } ``` Now our config will make use of the `user`, `hostname` and `version` variables we created in our `flake.nix`. Now we are finally ready to apply our new config, run the below command. (replace `homelab` with whatever the `hostname` variable is in your `flake.nix`) ```sh sudo nixos-rebuild boot --flake /home/$USER/.flake#homelab ``` If any errors come up please leave a comment below and I'll be happy to help troubleshoot. If the command completes and there were no issues then you can just run `sudo reboot`. ## Fail2Ban and Modularization In this example I am setting up Fail2Ban for a local Vaultwarden server, you will need to adapt this based on the Fail2Ban instructions for your software. 
Let's start by creating a folder at `~/.flake/nix/modules` ``` mkdir ~/.flake/nix/modules ``` In there let's open up a new file; you can name it either `fail2ban.nix` or `vaultwarden.nix` if that is the service you are protecting (the name doesn't matter that much, it's just semantics). In that file start with this content: ```nix { pkgs, lib, config, user, ... }: { services.fail2ban = { enable = true; }; } ``` Throughout this I'll be referring to these pages: https://nixos.wiki/wiki/Fail2ban https://github.com/dani-garcia/vaultwarden/wiki/Fail2Ban-Setup https://github.com/fail2ban/fail2ban/blob/master/config/action.d/cloudflare.conf Now I assume if you're setting up Fail2Ban you understand how it works and what it is, so you may be asking: alright, so how can I create stuff like jails and custom actions and filters? Well Nix has you covered. Let's take a look at my jail configuration for Vaultwarden. ```nix { pkgs, lib, config, user, ... }: { services.fail2ban = { enable = true; jails = { vaultwarden.settings = { enabled = true; filter = "vaultwarden"; action = '' cf iptables-allports ''; # This is the path where I have my vaultwarden data logpath = "/home/${user}/vaultwarden/vw-data/vaultwarden.log"; # User gets banned after 4 incorrect attempts maxretry = 4; bantime = "52w"; findtime = "52w"; chain = "FORWARD"; }; }; }; } ``` "Ok awesome, but what is the `cf` action and what about the vaultwarden filter, how do I set that up on NixOS when /etc is read-only?" Fear not, because here is `environment.etc` ```nix { pkgs, lib, config, user, ... }: { environment.etc = { "fail2ban/filter.d/vaultwarden.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter '' [INCLUDES] before = common.conf [Definition] failregex = ^.*Username or password is incorrect\. Try again\. IP: <ADDR>\. Username:.*$ ignoreregex = ''); }; services.fail2ban = { # ... }; } ``` You may recognise the string used as the first child of `etc`: it is the fail2ban path for custom filters! 
Now let's add the `cf` action which is for banning through Cloudflare. You only need this if you have made your Vaultwarden instance public via Cloudflare/-Tunnels. ```nix environment.etc = { "fail2ban/action.d/cf.conf".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter '' [Definition] actionstart = actionstop = actioncheck = actionban = /run/current-system/sw/bin/curl -s -o /dev/null -X POST \ -H "X-Auth-Email: <cfuser>" \ -H "X-Auth-Key: <cftoken>" \ -H "Content-Type: application/json" \ -d '{"mode":"block","configuration":{"target":"ip","value":"<ip>"},"notes":"Fail2Ban <name>"}' \ "https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules" actionunban = /run/current-system/sw/bin/curl -s -o /dev/null -X DELETE -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \ https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules/$(/run/current-system/sw/bin/curl -s -X GET -H 'X-Auth-Email: <cfuser>' -H 'X-Auth-Key: <cftoken>' \ 'https://api.cloudflare.com/client/v4/user/firewall/access_rules/rules?mode=block&configuration_target=ip&configuration_value=&page=1&per_page=1' | tr -d '\n' | cut -d'"' -f6) [Init] cftoken = your-token cfuser = jasper-at-windswept@example.com ''); }; ``` Again you can see that the initial string just references the path as a child of `/etc`. It should be pretty conclusive from there how to add your own actions and filters. And if ever in doubt, check the NixOS Wiki and search.nixos.org for options relating to fail2ban. As a side note, getting this to work took me literal days so yeah, haha. Now to get this to work in our config add the following to your `configuration.nix` ``` { config, lib, pkgs, user, hostname, version, ... 
}: { imports = [ ./hardware-configuration.nix ./modules/vaultwarden.nix ]; } ``` Now since we moved to a flake and I snuck `nh` into your configuration you can run the super simple command: - `nh os switch` This will rebuild your system and switch over to it automatically based on the flake path we provided earlier, `~/.flake` ## Securing your Server Well this is a pretty big post now isn't it. The best way I secure my server is by disabling SSH via password and only allowing my personal private key. This can be done as follows: On your personal computer (the one with your private key) - `ssh-copy-id -i ~/.ssh/private_key.pub admin@your.homelab.ip.address` You should also change your `~/.ssh/config` on your PC to use that private key when connecting to your homeserver. ``` Host your.homelab.ip.address IdentityFile ~/.ssh/private ``` - **KEEP A LOGGED IN TERMINAL TO YOUR SERVER OPEN BEFORE REBUILDING WITH PASSWORDS DISABLED, AS IF ANYTHING GOES WRONG YOU WILL BE LOCKED OUT!!!** On your homeserver open up `~/.ssh/authorized_keys` and copy the newly created string. `configuration.nix` ```nix { users.users.${user} = { # ... openssh.authorizedKeys.keys = [ # Paste your key into double quotes here # Example: "ssh-ed25519 AAAA------------ jasper@nixos" ]; }; services = { openssh = { enable = true; settings = { PasswordAuthentication = false; }; }; }; } ``` Then rebuild, this time though we will use the `boot` option and restart the server. ``` nh os boot reboot ``` Now on your personal computer, open a new terminal and try to connect to the server via SSH. If you connect without being asked for a password then hurray, you have just secured your server! If you have any troubles please leave a comment or contact me on Discord (`jasper-at-windswept`) and I will try to help out! That is all for this post, make sure to follow and like the post if you found it helpful and I will see you all in the next one.
jasper-at-windswept
1,908,328
Mapping LLM Integration Levels to Levels of Autonomous Driving, it does not have to be all or nothing!
I thought it would be interesting to compare LLM integration into your day to day business workflows...
0
2024-07-02T02:15:34
https://dev.to/alnutile/mapping-llm-integration-levels-to-levels-of-autonomous-driving-it-does-not-have-to-be-all-or-nothing-2m0j
llm, larallama, rag
I thought it would be interesting to compare LLM integration into your day to day business workflows to Levels of Automobile Automation. > BTW Follow me on YouTube https://www.youtube.com/@alfrednutile Here are the levels of Automobile Automation: * Level 0 - No Automation: The driver performs all driving tasks. * Level 1 - Driver Assistance: The vehicle can assist with either steering or acceleration/deceleration, but not both simultaneously. * Level 2 - Partial Automation: The vehicle can control both steering and acceleration/deceleration in some situations, but the driver must remain engaged. * Level 3 - Conditional Automation: The vehicle can handle all aspects of driving in certain conditions, but the driver must be ready to take control when prompted. * Level 4 - High Automation: The vehicle can handle all driving tasks in specific circumstances without human intervention. * Level 5 - Full Automation: The vehicle can perform all driving tasks under all conditions, with no human intervention needed. Right now there seems to be a lot of “all or nothing” mindset when it comes to working with LLMs and I think each tasks you face day to day might benefit differently or not at all from where this tech is right now. > The challenge—and opportunity—lies in identifying which of your daily tasks could benefit from LLM assistance and at what level ## Level 0 Sometimes this is it! You might try and use it to write cover letters for job postings only to find out that Grammarly was enough. But you tried. You pasted a few bits of your writing style into a chat window, pasted the job posting and said “Write me a cover letter in my voice for this job” and what came out was embarrassing. ![Embarrassed](https://sundance-software.nyc3.digitaloceanspaces.com/larallama/mean-girls-embarrassed.gif) Great stop right there for that task but keep an open mind to other tasks where LLMs might be more helpful. 
## Level 1

One example of this level: me writing this article, then giving what I write to the LLM to review. I can give it some examples of my voice and then ask it to just update any places that could use more "meat on the bones", etc.

![Meat on the Bones](https://sundance-software.nyc3.digitaloceanspaces.com/larallama/tasting-the-beef-matty-matheson.gif)

I might not like what it outputs, or maybe I do. Maybe I use it as is, or maybe I compare the two and just move some bits over. But I might end up with something a little better than what Grammarly could have done. Using examples of my tone, it does a good job of not sounding like an alien from another planet trying to sound human and professional.

## Level 2

Let's take the above and take it up a notch. Let's say I have a dozen articles in my www.larallama.io collection that I wrote, and then I start a chat with a bullet list of points I want to cover in a blog post and tell it to have at it! Maybe, just maybe, it can pull it off. Maybe it can do some great tweets or LinkedIn "tweets"? Anyway, maybe it keeps the tone, or ten prompts later it pulls it off. That is great - it took the bones of my story and really fleshed it out for me, potentially saving time and enhancing the quality of the content.

## Level 3

OK, like above, we are building confidence, and we even set up a system like www.larallama.io to collect data from the week. News about Laravel, news about RAG systems, news about LaraLlama.io itself. And then I tell it, "Every Friday, write a newsletter and email me a copy."

Great, but then I check it, and if it looks good I post it to the mailing list using [INSERT YOUR MAILING LIST SERVICE HERE].

And that is great - I have a moment to check it, even tell it to try again, etc. I am not 100% trusting it yet, but getting closer. This level of automation can significantly reduce your workload while maintaining quality control.

## Level 4

Then comes Level 4! Level 3 proves itself week after week.
Readers are writing back about how impressed they are! They are even sending emails to LaraLlama that become suggestions for the next articles, and building reports on customer feedback. It just does not end!

![Can not Stop](https://sundance-software.nyc3.digitaloceanspaces.com/larallama/we-cant-stop-it-power-rangers-zeo.gif)

So we turn on FULL AUTOMATION!!! The system now sends the emails every Friday without my proofing, based on the consistent quality it has demonstrated.

## Level 5

OK, my example ended above; this one just goes over the top. Let's list out what just happened in Level 4 to see how crazy this could get, then wrap this up.

* One, the system is sending emails every Friday about the news of the week.
* Two, the system is reading customer feedback to adjust each week to that feedback, and preventing the infamous trolling of the system that would lead it to write about some obscure topic.
* Three, the system is also taking suggestions based on the stats of the articles read!

This level of automation represents a significant leap in AI-assisted content creation and management, though it's important to note that human oversight may still be necessary in some capacity.

## Wrapping it Up

And that's how far we can potentially go with automation for our different day-to-day work tasks. However, it's important to remember that not all business processes are ready for any level of AI integration. For instance, tasks like helping a customer get a quote on a new web feature still require a human touch (though this might change in a few years). Similarly, creating engaging videos about innovative ideas for LaraLlama.io remains a primarily human-driven process. (For now 🤔)

The key takeaway is that LLM integration isn't an all-or-nothing proposition. There's a wide spectrum of possibilities between no automation and full automation. The challenge—and opportunity—lies in identifying which of your daily tasks could benefit from LLM assistance and at what level.
Some processes might be ready for Level 4 or 5 automation, while others might be best served by Level 1 or 2 support. As we move forward, it's crucial to approach LLM integration strategically. Start small, experiment with different levels of automation for various tasks, and gradually increase the AI's role as you build confidence in its capabilities. By understanding these different levels of LLM integration, you can make informed decisions about how to leverage this powerful technology in your day-to-day business workflows, potentially boosting productivity and innovation in ways we're only beginning to explore. (That is such an LLM worded sentence 🤦‍♂️)
alnutile
1,908,316
ToolifyPerfector: Your Ultimate Toolkit for Web Utilities and Converters
Welcome to ToolifyPerfector, your go-to destination for a comprehensive suite of utilities and...
0
2024-07-02T02:02:30
https://dev.to/therahul_gupta/toolifyperfector-your-ultimate-toolkit-for-web-utilities-and-converters-7p3
webdev, javascript, programming, tutorial
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j7wirrcxy6je6v0yrfro.png)

Welcome to ToolifyPerfector, your go-to destination for a comprehensive suite of utilities and tools designed to enhance your online experience. Whether you’re a developer, designer, or just a curious internet user, ToolifyPerfector has something for everyone. Explore our vast array of tools that cater to various needs, from website status checks to content formatting. Here’s a closer look at what we offer:

## Utilities

- **Website Status Checker**: Instantly check whether a website is online or experiencing downtime.
- **User Agent Finder**: Discover details about your user agent for debugging or compatibility checks.
- **What’s My IP**: Quickly find out your public IP address.
- **Ping**: Measure the latency for any given address to assess network speed and reliability.
- **URL Unshortener**: Uncover the original URL behind shortened links.
- **URL Encoder/Decoder**: Safely encode or decode URLs for transmission and readability.
- **SSL Checker**: Verify the SSL certificate status of any website to ensure secure connections.
- **QR Code Generator/Reader**: Create infinite QR codes or read them from images seamlessly.
- **HTTP Headers Parser**: Analyze HTTP headers for any URL to troubleshoot web requests.
- **UUIDv4 Generator**: Generate unique UUIDv4 IDs for secure identification.
- **YouTube Thumbnail Downloader**: Download thumbnails from YouTube videos easily.
- **E-Mail Validator**: Validate email addresses individually or in bulk to ensure accuracy.
- **Redirect Checker**: Check whether a URL has a redirect and trace its path.
- **Random Number Generator**: Generate random numbers with specific constraints for various applications.

## Converters

- **RGB to Hex / Hex to RGB**: Convert between RGB color values and hexadecimal codes effortlessly.
- **Timestamp Converter**: Switch between UNIX timestamps and human-readable dates.
- **Text to Binary / Binary to Text**: Encode and decode text to and from binary format.
- **Text to Base64 / Base64 to Text**: Convert text to Base64 encoding and vice versa.
- **Image to Base64**: Convert images to Base64 strings for easy embedding.
- **Markdown to HTML / HTML to Markdown**: Convert documents between Markdown and HTML formats.
- **CSV to JSON / JSON to CSV**: Transform data between CSV and JSON formats.
- **JSON to XML / XML to JSON**: Convert data structures between JSON and XML formats.
- **JSON Beautifier / Validator**: Format and validate JSON data for better readability and error checking.
- **ROT13 Encoder/Decoder**: Encode and decode text using the ROT13 substitution cipher.
- **Unicode to Punycode / Punycode to Unicode**: Convert domain names between Unicode and Punycode formats.
- **Image Format Converters**: Convert between JPG, PNG, WEBP, and more to suit your needs.
- **Image Compressor / Resizer**: Optimize and resize images to the desired specifications.
- **Measurement Converters**: Convert units for memory, length, speed, temperature, weight, and more.
- **HTML Code Editor**: Edit HTML code with a live preview to see changes instantly.

## Security

- **Password Generator / Strength Test**: Create secure passwords and test their strength.
- **Hash Generators (MD5, SHA, Bcrypt)**: Generate various hashes from text for secure storage.
- **Credit Card Validator**: Validate credit card details to ensure correctness.

## Content Tools

- **Word Count / Lorem Ipsum Generator**: Count words and letters in text or generate placeholder text.
- **Text Separator / Duplicate Lines Remover**: Organize and clean up text by separating or removing duplicates.
- **Line Break Remover**: Eliminate line breaks from text for a continuous flow.
- **E-Mail / URL Extractor**: Extract email addresses or URLs from blocks of text.
- **SEO Tags / Twitter Card Generator**: Create SEO tags and Twitter cards to enhance your website’s visibility.
- **HTML Entity Encode / Decode**: Convert text to HTML entities and back.
- **HTML Minifier / Formatter**: Minify or format HTML code for better performance and readability.
- **CSS / JS Minifier / Formatter**: Minify or format CSS and JavaScript code.
- **JS Obfuscator**: Protect your JavaScript code by obfuscating it.
- **SQL Beautifier**: Format SQL queries for better clarity and maintenance.
- **Privacy Policy / Terms of Service Generator**: Create essential legal documents for your website.
- **Robots.txt / HTACCESS Redirect Generator**: Generate files for website management.
- **Source Code Downloader**: Download the source code of any webpage for offline analysis.
- **Text Replacer / Reverser**: Replace or reverse text strings as needed.
- **Word Density Counter / Palindrome Checker**: Analyze word density or check for palindromes in text.
- **Case Converter / Text to Slug**: Change text case or convert text to slugs for URLs.
- **Text Randomizer / Shuffle Text Lines**: Randomize or shuffle lines of text for varied outputs.
- **Encode / Decode Quoted Printable**: Encode or decode text to quoted-printable format.
- **Countdown Timer / Stopwatch**: Use online timers for various time-tracking needs.
- **Scientific Calculator / World Clock**: Perform calculations or check global times.
- **Wheel Color Picker / Virtual Coin Flip**: Pick colors or simulate a coin flip.
- **Text Repeater / Aim Trainer**: Repeat text strings or train your aim with our online tools.
- **Image Rotate / Grayscale**: Rotate images or convert them to grayscale.
- **Date Picker Calendar**: Select specific dates and years easily.
- **Paste & Share Text**: Share text online quickly and conveniently.

## Domains

- **Domain Generator / WHOIS**: Generate domain names or get WHOIS information for any domain.
- **IP to Hostname / Hostname to IP**: Convert between IP addresses and hostnames.
- **IP Information**: Get detailed information about any IP address.
- **HTTP Status Code Checker**: Check HTTP status codes for URLs to diagnose issues.
- **URL Parser**: Extract detailed information from URLs.
- **DNS Lookup**: Query DNS records for any domain name.
- **What is My Browser**: Identify your browser details.
- **Open Port Checker**: Check for open ports on your connection to ensure security.
- **BMI Calculator**: Calculate your Body Mass Index (BMI) based on height and weight.
- **Online SMTP Test**: Test and check your SMTP server for email delivery issues.
- **GZIP Compression Test**: Verify if Gzip compression is enabled on your website for improved performance.

At ToolifyPerfector, we strive to provide you with all the essential tools you need in one convenient platform. Visit [ToolifyPerfector](https://toolifyperfector.com/) today and streamline your online tasks with ease!
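For the curious, many converters like these are only a few lines of code. Here's a hypothetical TypeScript sketch of an RGB ↔ Hex conversion (illustrative only, not ToolifyPerfector's actual implementation; the function names are made up):

```typescript
// Hypothetical RGB <-> Hex helpers, sketching what a color converter does.
function rgbToHex(r: number, g: number, b: number): string {
  // each channel becomes a two-digit hexadecimal value
  return "#" + [r, g, b].map((v) => v.toString(16).padStart(2, "0")).join("");
}

function hexToRgb(hex: string): [number, number, number] {
  const n = parseInt(hex.replace("#", ""), 16);
  return [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff];
}

console.log(rgbToHex(255, 99, 71)); // "#ff6347"
console.log(hexToRgb("#ff6347")); // [255, 99, 71]
```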
therahul_gupta
1,908,314
Step Toward Better Tenant Provisioning
In a multi-tenant SaaS application, you often need to manage resources that are tenant-specific....
0
2024-07-02T01:59:19
https://jason.wadsworth.dev/step-toward-better-tenant-provisioning/
aws, saas, cdk, stepfunctions
In a multi-tenant SaaS application, you often need to manage resources that are tenant-specific. Whether it's a tenant-specific role, isolated DynamoDB tables, or per-tenant Cognito user pools, you need to have a way to deploy and update these resources across your application. In this blog, I'll show you how I have approached this problem.

There are three components to managing tenant-specific resources: creating new resources as a tenant is added, updating resources as your application's needs change, and deleting resources when a tenant is deleted.

## The Old Way

![EventBridge -> Lambda -> SDK to create resources](https://jason.wadsworth.dev/images/2024-07-01/eb-Lambda-sdk.png)

In the past, I would use a model that looks something like the image above to create and delete resources. When a new tenant is added to the system an event is sent out, and that would trigger a Lambda function. That Lambda function would use the SDK to create the necessary resources. Similarly, a delete would send out an event that would trigger a Lambda function that would use the SDK to delete tenant-specific resources.

There are some problems with this approach. First, I like to use CDK, specifically [L2 constructs](https://docs.aws.amazon.com/cdk/v2/guide/constructs.html), for all of my infrastructure. The SDK is very different, so there is a cognitive cost to using the SDK. You often need to remember more details, and the structure is very different. Second, there isn't a place to go see all the resources associated with a tenant. This isn't a big deal when you have one resource per tenant, but as that grows it's nice to be able to go to a single spot to see everything that belongs to that tenant. Third, while creating new resources when a tenant is created, and deleting them when a tenant is deleted, is pretty straightforward, updating is not. There isn't an event that you can use to trigger the update; at least not one that is a system event.
Updates, unlike creates and deletes, are deploy-time changes. As the application changes, you need to update the infrastructure of your tenants. That's a very different process than creating and deleting. Updating requires code that is aware of the current state and understands how to go from one state to the other. It requires code to handle rolling back into a previous state when something goes wrong. Updates are complicated, and anytime I can remove complicated code I'm going to do it.

## A Better Approach

Hearing others have the same problem, I wanted to find a solution to make things easier. I knew I wanted CDK and CloudFormation to be a part of the solution, and my thoughts quickly went to [Step Functions](https://aws.amazon.com/step-functions/). Could there be an answer there? Here is what I figured out.

It starts with CDK. I create a Stack in CDK that holds all the resources for a tenant. This stack includes the use of `CfnParameter` to pass in the identifier of the tenant. Any resources that you need to create are added to this stack. The code looks something like this.

```typescript
export class TenantTemplateStack extends Stack {
  constructor(scope: Construct, id: string, props: StackProps) {
    super(scope, id, props);

    const parameterTenantId = new CfnParameter(this, 'TenantId', {
      type: 'String',
      description: 'The ID of the tenant',
    });

    const tenantId = parameterTenantId.valueAsString;

    // any resources you want to provision per tenant go here
  }
}
```

The stack needs to be synthesized and made available, so in our tenant management stack I include an S3 bucket where the template will be deployed, I synthesize the above template, and I deploy the synthesized template to the bucket. The keys to this are the use of the `BootstraplessSynthesizer` in the template stack and the `Stage` that I'll use to synthesize it.
This creates a sort of CDK Inception, where your "cdk.out" will have your synthesized stack(s) and each stack will have another synthesized stack for your tenant template. Accessing the assembly of the stage allows us to grab the output and push it to S3 using the `BucketDeployment` construct.

```typescript
export class TenantManagement extends Construct {
  public readonly templateBucketName: string;
  public readonly templateBucketKey: string;

  constructor(scope: Construct, id: string, props: TenantManagementProps) {
    super(scope, id);

    const stack = Stack.of(this);

    const templateBucket = new Bucket(this, 'TemplateBucket', {
      blockPublicAccess: {
        blockPublicAcls: true,
        blockPublicPolicy: true,
        ignorePublicAcls: true,
        restrictPublicBuckets: true,
      },
      objectOwnership: ObjectOwnership.OBJECT_WRITER,
      encryption: BucketEncryption.S3_MANAGED,
      enforceSSL: true,
      publicReadAccess: false,
      versioned: false,
      // important so that updates can be triggered based on this event
      eventBridgeEnabled: true,
    });

    const stage = new Stage(this, 'SynthStage');
    new TenantTemplateStack(stage, 'TenantTemplate', {
      // this allows the synthesis to generate a template without resolving CDK values like account and region
      synthesizer: new BootstraplessSynthesizer(),
    });

    // synthesize the template stack
    const assembly = stage.synth();

    // the stage only has one stack, so it's safe to grab index zero here to get the path of the output
    const templateFullPath = assembly.stacks[0].templateFullPath;

    // the bucket deployment construct will copy the resources in the specified path to S3
    new BucketDeployment(this, 'EachTenantStackDeployment', {
      destinationBucket: templateBucket,
      sources: [Source.asset(dirname(templateFullPath))],
    });
  }
}
```

At this point, I have a template that is being created that can be used for each of our tenants, but I still need to run the template at the right times. I'll create a few Step Functions to do this.
I'll start with the create because it's really hard to test an update or a delete if you don't first create something. :) The create is triggered in the same way our Lambda function that was making SDK calls was triggered: via a tenant-created event sent to EventBridge. The flow looks a bit like this:

![EventBridge to Step Functions to CloudFormation](https://jason.wadsworth.dev/images/2024-07-01/eb-sfn-cfn.png)

The detail of the Step Function looks like this:

![Create tenant resources Step Function](https://jason.wadsworth.dev/images/2024-07-01/create-tenant.png)

The Step Function is actually rather simple. It makes a call to CreateStack using the template I uploaded to the S3 bucket. After calling CreateStack it calls DescribeStacks in a loop, checking to see that it has completed and failing if the stack fails. This way I can add metric alarms to notify the team if there are failures.

Next, I'll do the delete. Like the create, the delete is triggered from a tenant-deleted event sent to EventBridge. This runs a Step Function that looks a bit like this:

![Delete tenant resources Step Function](https://jason.wadsworth.dev/images/2024-07-01/delete-tenant.png)

This one is a bit more complicated than the create because it first checks to see that the stack is in a state that allows it to be deleted. This way you don't end up with errors if you try to delete a stack while it's in the process of being updated.

Finally, the update.

![CDK to S3 to EventBridge to Step Functions to CloudFormation](https://jason.wadsworth.dev/images/2024-07-01/cdk-s3-eb-sfn-cfn.png)

![Update tenant resources Step Function](https://jason.wadsworth.dev/images/2024-07-01/update-tenants.png)

This Step Function is started whenever the template is updated in the S3 bucket. This means that when the deployment of our tenant management stack sends the updated template to the bucket, this Step Function will run, which will automatically update all of your tenants' stacks.
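For a sense of what the create flow's call-and-poll loop looks like in Amazon States Language, here's a rough sketch (state names, wait times, and parameters are illustrative, not the exact definition used here):

```json
{
  "StartAt": "CreateStack",
  "States": {
    "CreateStack": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:cloudformation:createStack",
      "Parameters": {
        "StackName.$": "States.Format('tenant-{}', $.tenantId)",
        "TemplateURL.$": "$.templateUrl",
        "Parameters": [
          { "ParameterKey": "TenantId", "ParameterValue.$": "$.tenantId" }
        ]
      },
      "ResultPath": null,
      "Next": "Wait"
    },
    "Wait": { "Type": "Wait", "Seconds": 30, "Next": "DescribeStacks" },
    "DescribeStacks": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:cloudformation:describeStacks",
      "Parameters": { "StackName.$": "States.Format('tenant-{}', $.tenantId)" },
      "Next": "CheckStatus"
    },
    "CheckStatus": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.Stacks[0].StackStatus",
          "StringEquals": "CREATE_COMPLETE",
          "Next": "Done"
        },
        {
          "Variable": "$.Stacks[0].StackStatus",
          "StringEquals": "CREATE_IN_PROGRESS",
          "Next": "Wait"
        }
      ],
      "Default": "Failed"
    },
    "Done": { "Type": "Succeed" },
    "Failed": { "Type": "Fail" }
  }
}
```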
The State Machine looks a bit overwhelming, but when broken down into its parts it's pretty easy to understand.

![update tenant resources Step Function with tenant loop highlighted](https://jason.wadsworth.dev/images/2024-07-01/update-tenants-tenant-loop.png)

The first part of this is just getting a list of tenants. I am storing our tenants in a DynamoDB table so I can query the data from there. DynamoDB uses paging, so I have to have some logic to loop over the data and call back into DynamoDB to get the next page.

![Update tenant resources Step Function with the first describe loop highlighted](https://jason.wadsworth.dev/images/2024-07-01/update-tenants-describe-loop.png)

The next bit is checking to see if the tenant's stack can be updated. This is important because a tenant may be in the process of being created when you deploy a new template. If you don't do this loop the update will fail and the new tenant won't get the updates.

![Update tenant resources Step Function with the update stack and describe loop highlighted](https://jason.wadsworth.dev/images/2024-07-01/update-tenants-update-describe-loop.png)

Lastly, the UpdateStack call, and subsequent looping, looks just like our create logic adjusted for an update.

## Conclusion

When you are building a multi-tenant SaaS app it's important to have a strategy for managing any tenant-specific resources you may have. Using Step Functions with the CDK is a great way to manage those updates. With this approach I get to continue to use CDK to model our resources, I have one place to go to see all the resources for a tenant (the CloudFormation stack for that tenant), and the complexities of updating and rolling back changes are managed by CloudFormation.
jasonwadsworth
1,908,264
You can be a Frontend Dev without Javascript (and play Super Mario 64 from the browser)
All of this began because on Sunday, I made a silly LinkedIn post that started with: "Hey Chatgpt,...
0
2024-07-02T01:57:56
https://dev.to/mauroaccorinti/why-you-can-play-super-mario-64-from-the-browser-and-become-a-frontend-dev-without-javascript-3710
frontend, webassembly, learning, webdev
All of this began because on Sunday, I made a silly [LinkedIn post](https://www.linkedin.com/posts/mauroaccorinti_hey-chatgpt-if-i-wanted-to-be-a-frontend-activity-7210752058730172416-3xwk/?utm_source=dev-to-why-you-can-play-super-mario-64-from-the-browser) that started with:

**"Hey Chatgpt, if I wanted to be a frontend developer but I didn't want to learn HTML, CSS, or Javascript, what should I do? Be very rude about it and use tons of emojis"**

The post was funny. Snarky AI is funny. A frontend developer who doesn't know about HTML, CSS or JS is funny. Until I was proven wrong by a very kind senior developer who mentioned in that post:

> "Realistically you could be a front-end dev for decades and never touch HTML, CSS, or JS - you could do WinForms, WPF/XAML, Unity, Unreal Engine, CryEngine, Swift, Blazor, Qt, or any number of other front-end systems which have nothing to do with HTML, CSS or JS. And in fact, nearly all of them can be compiled to Web Assembly and run in a browser with zero HTML, CSS or JS involved." - [Timothy Partee](https://www.linkedin.com/in/timothypartee/)

I had no idea what this man was talking about. It seemed unimaginable to think of the web as something that isn't JS-related. I mean pffff, I've never seen Unity, Unreal Engine, or WinForms as part of the [frontend roadmap](https://roadmap.sh/frontend), have you?

But also... **that sounds really cool to learn about.**

Can you imagine running video games off the browser if your computer was good enough for it? Or programming a website without using your typical IDE with JS-based frameworks?

So let's start my fall into this rabbit hole of insanity!

## Can you run Unreal Engine on the web?

I started by googling the question that seemed the most interesting to me. For the uninitiated, Unreal Engine is a game development suite made by Epic Games (the creators of Fortnite). Think of it as one of the big industry standards for making games. With such a powerful tool...
wouldn't it be sort of cool to run a game directly from Chrome, for example? So can you? The answer was... not exactly what I was looking for.

## There are 3 interesting points I found about using Unreal on the Web

1. From what I can tell, while yes, there is a way to have the program use HTML5 (which you can in theory use for making websites), support for it seems to have died in the latest releases.
2. I've also found a plugin called "Unreal4Web" which can be used to import and showcase 3D models on your website... but without any real demos I can quickly find, I'll just have to take their word for it. It seems to have originally been used to show models of cars for manufacturers' websites, but it could be used for other industries as well.
3. You could use something called [Pixel Streaming](https://dev.epicgames.com/documentation/en-us/unreal-engine/pixel-streaming-in-unreal-engine) to host your game on a server and use the browser as the input device to interact with it. This is excellent for so many reasons. As long as you have a great and stable internet connection, it makes it so you can play a game from any device, no matter how good said device is. But it's not exactly running in the browser natively, is it?

Nope, this wouldn't do. So you can't use Unreal directly like this to host it on the web.

---

Did you know this article was originally written as emails for the Exceptional Frontend Newsletter? 👀 [You can learn more about it here](https://exceptional-frontend.ck.page/sign-up)

---

But wait a minute... Tim mentioned you probably could use Web Assembly for that.

## What the heck is Web Assembly?

You know how in Javascript, if you just wanted to add two numbers together, you could just do something like `const result = num1 + num2`? Well, we can do that because Javascript is a high-level language. That just means that the code we write abstracts away the nitty-gritty computer code that performs the addition.
So instead of worrying about the details of how the computer processes the numbers, where it stores them, in which part of memory... we just write straightforward, human-readable code! Under the hood though, our `num1 + num2` operation could equal hundreds upon hundreds of instructions that are given to the CPU to change binary values to other binary values. I'm talking about 1s and 0s here. All at speeds that we can't even fathom. As well as abstracted away so we don't need to think about it.

Assembly languages are the sort of coding languages that get as close to giving instructions to the computer as possible. They're hard to read, involve knowing a lot about the hardware, and are 10000% something you only run into in software engineering courses or college.

And guess what? Since 2017, browsers have had their own version of this low-level language that they can run, called **Web Assembly (WASM)**.

Reading an official description of what Web Assembly is (the bolded text is my addition):

> WebAssembly (WASM) is a low-level binary instruction format designed to be a safe, fast, and platform-independent runtime for high-level languages on the web. **What's super freaking dope about it** is it allows developers to run code written in languages like C, C++, Rust, and others on web browsers at near-native speeds.

So browsers don't just run Javascript anymore. They can run Web Assembly alongside Javascript. The cool thing is you don't ever write in actual Web Assembly (dear heavens, imagine if we had to do that), but you code in other languages that aren't JS and then use programs or plugins to convert it to Wasm.

For example...

## Did you know Figma is written in C++? 👀

One of the reasons why Figma is so fast, loads instantly, and creates such a good user experience is that the main program you see isn't in Javascript at all. It's C++ that has been compiled into Wasm, which is later used with JS on the browser.
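To make the "compile another language to Wasm, then call it from JS" idea concrete, here's a toy, hand-assembled WASM module exporting an `add` function (roughly the binary a compiler would emit for a two-number add written in C), instantiated with the standard `WebAssembly` API. This is purely an illustrative sketch:

```typescript
// Bytes of a minimal WebAssembly module exporting add(a, b).
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00, // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // code: local.get 0, local.get 1, i32.add
]);

// Compile and instantiate synchronously (fine for a module this tiny).
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
const add = instance.exports.add as (a: number, b: number) => number;

console.log(add(2, 3)); // 5
```

Node.js and every modern browser can run this as-is, since the `WebAssembly` API is standard.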
I don't know why, but that fact **blew my mind** and opened my perspective on everything that can be done in the browser. Imagine a website that isn't just using HTML, CSS, and JS!!

Also imagine finding this out just now, 7 years late to the party, and being hyped when everybody is over it now 😂

[Here's a really great article made by the Figma team about their experience using WASM when it became available and increasing speed by 3x.](https://www.figma.com/blog/webassembly-cut-figmas-load-time-by-3x/)

Other platforms I've learned use WASM in similar ways are AutoCAD and Photoshop as well.

Oh, and I guess Super Mario 64.

## Using WASM to play Super Mario 64 natively on the browser

In June of 2020, [the original Super Mario 64 game was decompiled into C.](https://github.com/n64decomp/sm64) The effort took a team of passionate hobbyists over 2 years to fully reverse-engineer the game, gather as much info as possible, and figure out how to get the source code. The end product of all of this was a way to generate an executable that lets you natively run the game on your computer without it being emulated. Which is, like I said before, bananas!

Oh, but wait! What did we say before about WASM? That you can take something like a C program, compile it to WASM using something like [Emscripten](https://emscripten.org/), and then set it up to be run in the browser.

Does that mean that we can... 😏

Oh hell yeah - you can run Super Mario 64 in the browser today. [Here's the link if you want to check it out.](https://probablykam.github.io/Mario64webgl/)

(You might have audio issues like I did. That's because browsers usually don't allow a website to play music or audio without a prior user interaction. Click on the little lock in the URL bar of your browser and fiddle around with the settings to trust that site to play audio without user interaction.)

But wait a minute, now that WASM exists, does that mean...
## Can you become a frontend developer without Javascript after all?

After doing a deep dive into this really cool technology I'd never known about before, being blown away by the possibilities and excited about the future, all I have to say is...

**You should learn Javascript either way 😅**

First of all, WASM wasn't made to replace Javascript but to be used together with it. While it would be nice to think you can have a website made entirely with something like C++, the reality is that use cases for WASM are very limited (perfect for heavy computing and situations where native performance speed is a must, but not ideal for most other things).

Another aspect to consider is you can't interact with the DOM without Javascript. Or if you can, it's extremely clunky. It is a very new technology, after all. I have seen some interesting articles on its use in server environments, but oof, my brain is fried from learning so much about it these last 2 days. So I'll leave that investigation up to you if you pursue it.

Hopefully, this article served as a good introduction to the concept if you didn't know about it. (And served as an excuse to play a beloved N64 game from my childhood as well.)

Oh! And do let me know if I got something wrong about WASM in the comments below 👇 I'd love to learn more about the technology, and having conversations around it will help with that. Thank you!

---

## I write articles like this for my newsletter "Exceptional Frontend"

Twice a week you'll get frontend-centered posts that are fun and help you become a better developer. This article was a combination of 2 emails I wrote last week that I thought were interesting enough to turn into an article. 90% of what I write isn't posted anywhere else, so if you don't want to miss out you can [sign up here.](https://exceptional-frontend.ck.page/sign-up)
mauroaccorinti
1,907,528
Case (I) - KisFlow-Golang Stream Real-Time Computing - Quick Start Guide
Github: https://github.com/aceld/kis-flow Document:...
0
2024-07-02T01:57:03
https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-quick-start-guide-f51
go
<img width="150px" src="https://github.com/aceld/kis-flow/assets/7778936/8729d750-897c-4ba3-98b4-c346188d034e" />

Github: https://github.com/aceld/kis-flow
Document: https://github.com/aceld/kis-flow/wiki

---

[Part1-OverView](https://dev.to/aceld/part-1-golang-framework-hands-on-kisflow-streaming-computing-framework-overview-8fh)
[Part2.1-Project Construction / Basic Modules](https://dev.to/aceld/part-2-golang-framework-hands-on-kisflow-streaming-computing-framework-project-construction-basic-modules-cia)
[Part2.2-Project Construction / Basic Modules](https://dev.to/aceld/part-3golang-framework-hands-on-kisflow-stream-computing-framework-project-construction-basic-modules-1epb)
[Part3-Data Stream](https://dev.to/aceld/part-4golang-framework-hands-on-kisflow-stream-computing-framework-data-stream-1mbd)
[Part4-Function Scheduling](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-function-scheduling-4p0h)
[Part5-Connector](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-connector-hcd)
[Part6-Configuration Import and Export](https://dev.to/aceld/part-6golang-framework-hands-on-kisflow-stream-computing-framework-configuration-import-and-export-47o1)
[Part7-KisFlow Action](https://dev.to/aceld/part-7golang-framework-hands-on-kisflow-stream-computing-framework-kisflow-action-3n05)
[Part8-Cache/Params Data Caching and Data Parameters](https://dev.to/aceld/part-8golang-framework-hands-on-cacheparams-data-caching-and-data-parameters-5df5)
[Part9-Multiple Copies of Flow](https://dev.to/aceld/part-8golang-framework-hands-on-multiple-copies-of-flow-c4k)
[Part10-Prometheus Metrics Statistics](https://dev.to/aceld/part-10golang-framework-hands-on-prometheus-metrics-statistics-22f0)
[Part11-Adaptive Registration of FaaS Parameter Types Based on Reflection](https://dev.to/aceld/part-11golang-framework-hands-on-adaptive-registration-of-faas-parameter-types-based-on-reflection-15i9)

---

[Case1-Quick Start](https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-quick-start-guide-f51)

---

## Download KisFlow Source

```bash
$ go get github.com/aceld/kis-flow
```

[KisFlow Developer Documentation](https://github.com/aceld/kis-flow/wiki)

## 1. KisFlow Quick Start (Using Configuration Files)

Source Code Example: [kis-flow-usage/2-quick_start_with_config at main · aceld/kis-flow-usage](https://github.com/aceld/kis-flow-usage/tree/main/2-quick_start_with_config)

First, let's create a project with the following file structure:

### Project Directory

```bash
├── Makefile
├── conf
│   ├── flow-CalStuAvgScore.yml
│   ├── func-AvgStuScore.yml
│   └── func-PrintStuAvgScore.yml
├── faas_stu_score_avg.go
├── faas_stu_score_avg_print.go
└── main.go
```

### Flow

Define the current Flow. The current Flow is named "CalStuAvgScore"; it is a data flow for calculating students' average scores. Define two Functions: Function1 is in `Calculate` mode and holds the logic for calculating students' average scores, and Function2 is in `Expand` mode and prints the final results.

### Config

The configuration files for the Flow and Functions are as follows:

#### (1) Flow Config

> conf/flow-CalStuAvgScore.yml

```yaml
kistype: flow
status: 1
flow_name: CalStuAvgScore
flows:
  - fname: AvgStuScore
  - fname: PrintStuAvgScore
```

#### (2) Function1 Config

> conf/func-AvgStuScore.yml

```yaml
kistype: func
fname: AvgStuScore
fmode: Calculate
source:
  name: Student Scores
  must:
    - stu_id
```

#### (3) Function2 Config

> conf/func-PrintStuAvgScore.yml

```yaml
kistype: func
fname: PrintStuAvgScore
fmode: Expand
source:
  name: Student Scores
  must:
    - stu_id
```

### Main

Next is the main logic, which is divided into three steps:

* Load configuration files and get Flow instances.
* Submit data.
* Run the Flow.
> main.go

```go
package main

import (
	"context"
	"fmt"

	"github.com/aceld/kis-flow/file"
	"github.com/aceld/kis-flow/kis"
)

func main() {
	ctx := context.Background()

	// Load configuration from file
	if err := file.ConfigImportYaml("conf/"); err != nil {
		panic(err)
	}

	// Get the flow
	flow1 := kis.Pool().GetFlow("CalStuAvgScore")
	if flow1 == nil {
		panic("flow1 is nil")
	}

	// Submit a string
	_ = flow1.CommitRow(`{"stu_id":101, "score_1":100, "score_2":90, "score_3":80}`)
	// Submit a string
	_ = flow1.CommitRow(`{"stu_id":102, "score_1":100, "score_2":70, "score_3":60}`)

	// Run the flow
	if err := flow1.Run(ctx); err != nil {
		fmt.Println("err: ", err)
	}

	return
}
```

### Function1

The implementation logic of the first calculation process is as follows. `AvgStuScoreIn` is the input data type, currently containing three scores, and `AvgStuScoreOut` is the output data type, which is the average score.

> faas_stu_score_avg.go

```go
package main

import (
	"context"

	"github.com/aceld/kis-flow/kis"
	"github.com/aceld/kis-flow/serialize"
)

type AvgStuScoreIn struct {
	serialize.DefaultSerialize
	StuId  int `json:"stu_id"`
	Score1 int `json:"score_1"`
	Score2 int `json:"score_2"`
	Score3 int `json:"score_3"`
}

type AvgStuScoreOut struct {
	serialize.DefaultSerialize
	StuId    int     `json:"stu_id"`
	AvgScore float64 `json:"avg_score"`
}

// AvgStuScore(FaaS) calculates students' average scores
func AvgStuScore(ctx context.Context, flow kis.Flow, rows []*AvgStuScoreIn) error {
	for _, row := range rows {
		out := AvgStuScoreOut{
			StuId:    row.StuId,
			AvgScore: float64(row.Score1+row.Score2+row.Score3) / 3,
		}

		// Submit result data
		_ = flow.CommitRow(out)
	}

	return nil
}
```

### Function2

The logic for printing is to directly print the data as follows.
> faas_stu_score_avg_print.go

```go
package main

import (
	"context"
	"fmt"

	"github.com/aceld/kis-flow/kis"
	"github.com/aceld/kis-flow/serialize"
)

type PrintStuAvgScoreIn struct {
	serialize.DefaultSerialize
	StuId    int     `json:"stu_id"`
	AvgScore float64 `json:"avg_score"`
}

type PrintStuAvgScoreOut struct {
	serialize.DefaultSerialize
}

func PrintStuAvgScore(ctx context.Context, flow kis.Flow, rows []*PrintStuAvgScoreIn) error {
	for _, row := range rows {
		fmt.Printf("stuid: [%+v], avg score: [%+v]\n", row.StuId, row.AvgScore)
	}

	return nil
}
```

### Output

Finally, run the program and get the following results:

```bash
Add KisPool FuncName=AvgStuScore
Add KisPool FuncName=PrintStuAvgScore
Add FlowRouter FlowName=CalStuAvgScore
stuid: [101], avg score: [90]
stuid: [102], avg score: [76.66666666666667]
```

## 2. KisFlow Quick Start (Using Native Interface, Dynamic Configuration)

Source Code Example: [kis-flow-usage/1-quick_start at main · aceld/kis-flow-usage](https://github.com/aceld/kis-flow-usage/tree/main/1-quick_start)

### Project Directory

```bash
├── faas_stu_score_avg.go
├── faas_stu_score_avg_print.go
└── main.go
```

### Flow

### Main

> main.go

```go
package main

import (
	"context"
	"fmt"

	"github.com/aceld/kis-flow/common"
	"github.com/aceld/kis-flow/config"
	"github.com/aceld/kis-flow/flow"
	"github.com/aceld/kis-flow/kis"
)

func main() {
	ctx := context.Background()

	// Create a new flow configuration
	myFlowConfig1 := config.NewFlowConfig("CalStuAvgScore", common.FlowEnable)

	// Create new function configurations
	avgStuScoreConfig := config.NewFuncConfig("AvgStuScore", common.C, nil, nil)
	printStuScoreConfig := config.NewFuncConfig("PrintStuAvgScore", common.E, nil, nil)

	// Create a new flow
	flow1 := flow.NewKisFlow(myFlowConfig1)

	// Link functions to the flow
	_ = flow1.Link(avgStuScoreConfig, nil)
	_ = flow1.Link(printStuScoreConfig, nil)

	// Submit a string
	_ = flow1.CommitRow(`{"stu_id":101, "score_1":100, "score_2":90, "score_3":80}`)
	// Submit a string
	_ = flow1.CommitRow(`{"stu_id":102, "score_1":100, "score_2":70, "score_3":60}`)

	// Run the flow
	if err := flow1.Run(ctx); err != nil {
		fmt.Println("err: ", err)
	}

	return
}

func init() {
	// Register functions
	kis.Pool().FaaS("AvgStuScore", AvgStuScore)
	kis.Pool().FaaS("PrintStuAvgScore", PrintStuAvgScore)
}
```

### Function1

> faas_stu_score_avg.go

```go
package main

import (
	"context"

	"github.com/aceld/kis-flow/kis"
	"github.com/aceld/kis-flow/serialize"
)

type AvgStuScoreIn struct {
	serialize.DefaultSerialize
	StuId  int `json:"stu_id"`
	Score1 int `json:"score_1"`
	Score2 int `json:"score_2"`
	Score3 int `json:"score_3"`
}

type AvgStuScoreOut struct {
	serialize.DefaultSerialize
	StuId    int     `json:"stu_id"`
	AvgScore float64 `json:"avg_score"`
}

// AvgStuScore(FaaS) calculates students' average scores
func AvgStuScore(ctx context.Context, flow kis.Flow, rows []*AvgStuScoreIn) error {
	for _, row := range rows {
		out := AvgStuScoreOut{
			StuId:    row.StuId,
			AvgScore: float64(row.Score1+row.Score2+row.Score3) / 3,
		}

		// Submit result data
		_ = flow.CommitRow(out)
	}

	return nil
}
```

### Function2

> faas_stu_score_avg_print.go

```go
package main

import (
	"context"
	"fmt"

	"github.com/aceld/kis-flow/kis"
	"github.com/aceld/kis-flow/serialize"
)

type PrintStuAvgScoreIn struct {
	serialize.DefaultSerialize
	StuId    int     `json:"stu_id"`
	AvgScore float64 `json:"avg_score"`
}

type PrintStuAvgScoreOut struct {
	serialize.DefaultSerialize
}

func PrintStuAvgScore(ctx context.Context, flow kis.Flow, rows []*PrintStuAvgScoreIn) error {
	for _, row := range rows {
		fmt.Printf("stuid: [%+v], avg score: [%+v]\n", row.StuId, row.AvgScore)
	}

	return nil
}
```

### Output

```bash
Add KisPool FuncName=AvgStuScore
Add KisPool FuncName=PrintStuAvgScore
funcName NewConfig source is nil, funcName = AvgStuScore, use default unNamed Source.
funcName NewConfig source is nil, funcName = PrintStuAvgScore, use default unNamed Source.
stuid: [101], avg score: [90]
stuid: [102], avg score: [76.66666666666667]
```

---

Author: Aceld
GitHub: https://github.com/aceld

KisFlow Open Source Project Address: https://github.com/aceld/kis-flow
Document: https://github.com/aceld/kis-flow/wiki
aceld
1,908,313
How Melbourne's NDIS Providers Are Revolutionizing Disability Services
The National Disability Insurance Scheme (NDIS) is a transformative program aimed at providing...
0
2024-07-02T01:51:00
https://dev.to/dmacaringhand/how-melbournes-ndis-providers-are-revolutionizing-disability-services-3016
The National Disability Insurance Scheme (NDIS) is a transformative program aimed at providing comprehensive support to Australians with disabilities. In Melbourne, NDIS providers are at the forefront of this revolution, delivering personalized and high-quality services that significantly improve the lives of participants. By focusing on individual needs and fostering independence, these providers are setting new standards in disability care.

Tailored Support Plans

One of the most impactful ways [NDIS providers in Melbourne](https://www.dmacaringhand.com.au/) are revolutionizing disability services is through the creation of tailored support plans. Each participant receives a customized plan that addresses their unique needs, goals, and aspirations. This personalized approach ensures that services are not only relevant but also effective in enhancing the quality of life for individuals with disabilities.

Comprehensive Needs Assessment

The journey begins with a comprehensive needs assessment, where experienced professionals collaborate with participants and their families to understand their specific circumstances. This detailed evaluation considers various factors such as the type of disability, level of support required, and personal preferences. The result is a support plan that is truly reflective of the participant's needs.

Goal-Oriented Strategies

The tailored support plans are not just about meeting immediate needs but are also focused on long-term goals. Whether it's improving daily living skills, pursuing education or employment, or participating in community activities, Melbourne's NDIS providers develop strategies that empower participants to achieve their objectives. This goal-oriented approach fosters a sense of purpose and motivation among individuals with disabilities.

Innovative Service Delivery Models

Innovation is at the heart of Melbourne's NDIS providers. They continually seek new and effective ways to deliver services, ensuring participants receive the best possible care.

Integrated Care Teams

Many NDIS providers in Melbourne employ integrated care teams, consisting of professionals from various disciplines such as healthcare, social work, and education. This multidisciplinary approach ensures that participants receive holistic support, addressing all aspects of their well-being. Integrated care teams collaborate seamlessly, providing coordinated and cohesive services that maximize the benefits for participants.

Technology-Enhanced Support

Embracing technology has been a game-changer for NDIS providers in Melbourne. From telehealth services to assistive technology, providers are leveraging digital solutions to enhance the support they offer. Telehealth services, for instance, enable participants to access healthcare professionals remotely, ensuring continuity of care even during times of physical distancing. Assistive technology, on the other hand, includes devices and software that aid in communication, mobility, and daily living activities, greatly enhancing the independence of participants.

Community Engagement and Inclusion

A key aspect of the NDIS in Melbourne is promoting community engagement and inclusion. Providers are dedicated to ensuring that individuals with disabilities are not just recipients of services but active participants in their communities.

Social and Recreational Programs

NDIS providers organize a variety of social and recreational programs that encourage participants to engage with their peers and the broader community. These programs range from sports and arts activities to social clubs and community events. By participating in these activities, individuals with disabilities can develop social skills, build friendships, and enjoy a sense of belonging.

Employment Support Services

Employment is a critical area where NDIS providers are making a significant impact. Through tailored employment support services, participants receive assistance with job training, resume building, and job placement. Providers work closely with local businesses to create inclusive workplaces that welcome and support individuals with disabilities. This not only enhances the economic independence of participants but also promotes diversity and inclusion in the workforce.

Continuous Improvement and Feedback

Melbourne's NDIS providers are committed to continuous improvement. They actively seek feedback from participants and their families to refine and enhance their services.

Participant-Centered Approach

A participant-centered approach ensures that the voices of those receiving support are heard and valued. Regular feedback sessions and surveys are conducted to gather insights into the effectiveness of the services provided. This feedback is then used to make necessary adjustments and improvements, ensuring that the support remains relevant and effective.

Professional Development

To maintain high standards of care, NDIS providers invest in the professional development of their staff. Ongoing training and development programs ensure that staff members are equipped with the latest knowledge and skills to deliver top-quality services. This commitment to professional growth fosters a culture of excellence and innovation within the organizations.

Conclusion

Melbourne's NDIS providers are truly revolutionizing disability services through their innovative, personalized, and community-focused approaches. By developing tailored support plans, embracing technology, promoting community inclusion, and committing to continuous improvement, they are making a profound difference in the lives of individuals with disabilities. As these providers continue to evolve and innovate, the future of disability services in Melbourne looks brighter than ever.
dmacaringhand
1,908,312
How to Use VSCode Logpoint with Keyboard Shortcuts
How to use VSCode logpoint feature with only using keyboard.
0
2024-07-02T01:47:50
https://dev.to/wiscer/how-to-use-vscode-logpoint-with-keyboard-shortcuts-4gmc
vscode, logpoint, javascript
---
title: How to Use VSCode Logpoint with Keyboard Shortcuts
published: true
description: How to use VSCode logpoint feature with only using keyboard.
tags: 'vscode, logpoint, javascript'
canonical_url: null
id: 1908312
date: '2024-07-02T01:47:50Z'
---

## Introduction

This article demonstrates how to efficiently utilize VSCode Logpoints using only keyboard shortcuts. This method is particularly beneficial for users who rely on screen readers or prefer keyboard-centric workflows.

## Understanding VSCode Logpoint

VSCode Logpoints allow developers to dynamically insert logging messages into their code without modifying the source code itself. This feature is invaluable for debugging and understanding code execution flow.

## Setting Up Custom Shortcut for Logpoints

Since there is no default shortcut or menu item for adding Logpoints, follow these steps to define a custom shortcut:

- Open the keybinding editor with `Ctrl + K Ctrl + S`.
- Type `Logpoint` into the search box and press `Tab` to navigate to the `Debug: Add Logpoint` result. Press `Enter` to activate.
- Enter your desired keyboard shortcut, for example, `Ctrl + L Ctrl + P`, and confirm by pressing `Enter`.

## Example File

To illustrate the usage of Logpoints, consider the following JavaScript file:

```javascript
let a = 1;
let b = plus_4(a);
plus_9(b);
console.log("End of file");

function plus_4(x) {
  console.log("plus 4");
  return x + 4;
}

function plus_9(x) {
  console.log("Plus 9");
  return x + 9;
}
```

## Adding Logpoints to Replace `console.log`

Replacing repetitive `console.log` statements with Logpoints enhances debugging efficiency:

- Navigate to the line containing `console.log`.
- Use the custom shortcut (`Ctrl + L Ctrl + P`) to add a Logpoint. Enter a descriptive message when prompted, e.g., "Reached end of file", and press `Enter`.

Repeat these steps for each `console.log` statement in your code.
## Running in Debug Environment

To execute the JavaScript file with Logpoints:

- Open the debug panel with `Ctrl + Shift + D`.
- Navigate to `Javascript Debug Terminal` using `Tab` and activate it.
- Run the script using:

```bash
node index.js
```

Logpoint messages will be displayed in the terminal.

**Note for screen reader users:** Switch to screen reader compatibility mode with `F2` to read the terminal output. Use arrow keys to navigate through lines and press `F1` to return to normal mode.

Using Logpoints eliminates the need for repetitive `console.log` cleanup during debugging sessions, streamlining your workflow.

## Removing Logpoints

To remove a Logpoint, position the cursor on the relevant line and press `F9`, similar to removing breakpoints.

## Conclusion

Using VSCode Logpoints significantly improves debugging efficiency compared to traditional `console.log` methods. It saves time by reducing code clutter and repetitive cleanup tasks, making debugging more focused and productive.

We hope this article helps streamline your development process with VSCode Logpoints. Happy coding! Thanks for reading.
wiscer
1,908,309
Managing User Accounts and Groups in Linux
System Administration: In the realm of system administration, managing user accounts and groups is a...
0
2024-07-02T01:41:10
https://dev.to/horlatayorr/managing-user-accounts-and-groups-in-linux-42g8
System Administration

In the realm of system administration, managing user accounts and groups is a fundamental task. This article delves into a Bash script designed to automate the creation of user accounts and their association with specific groups, a common requirement in dynamic environments such as software development teams. The script not only streamlines the setup process but also ensures security practices by generating random passwords for each user.

The Script at a Glance

[Link to code repository](https://github.com/horlatayorr/HNG-Internship_Stage1.git)

This Bash script reads a text file containing usernames and group names, creates the users and groups, sets up home directories with appropriate permissions, generates random passwords, logs all actions to a log file, and stores the generated passwords securely.

Step-By-Step Explanation

1. Input Validation and Set Up

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vuu2opx93z72ecytdcda.PNG)

The script begins by checking if an input file is provided and sets up necessary directories and files.

Explanation:
- Checks if the input file is provided as an argument.
- Defines variables for the input file, log file, and password file.
- Creates necessary directories and files with appropriate permissions.

2. Helper Functions

Defines helper functions for logging messages and generating random passwords.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n6dqwg4yz5ayeazw1qwb.PNG)

Explanation:
- log_message: Logs messages with a timestamp to the log file.
- generate_password: Generates a random password using urandom.
- Backs up the existing log and password files before making any changes.

3. Processing the Input File

Reads the input file line by line, extracting usernames and groups, and processes each user.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ssnq5tzwh037h9b1d49d.PNG)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1d0wp64ca6uk1iln1570.PNG)

Explanation:
- Reads the input file line by line, trimming whitespace.
- Skips empty or malformed lines.
- Checks if the user already exists and logs a message if they do.
- Creates a personal group and the user if they do not already exist.
- Sets appropriate permissions for the user's home directory.

4. Password Generation

Generates and sets a random password for the user, then stores it securely.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/297n87t72c7qnckstxx2.PNG)

Explanation:
- Generates a random password for the user.
- Sets the generated password for the user.
- Stores the username and password securely in the password file.

5. Group Management

Adds the user to specified groups, creating any groups that do not already exist.

Explanation:
- Checks if groups are specified for the user.
- Splits the groups by comma and processes each group.
- Creates the group if it does not already exist.
- Adds the user to the group and logs the action.

Testing the Script

To test the script, create a sample input file and execute the script:

1. Create the Sample Input File:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjpfnyib9hvg2xgu6l9k.PNG)

2. Run the script.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/441yfgoivexokrgtcauv.PNG)

3. Check Outputs

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bulo60v4di00zpjiopcx.PNG)

- Log File: Review /var/log/user_management.log for detailed logs.
- Password File: Verify /var/secure/user_passwords.txt for generated passwords.
- User Groups: Check the groups for each user using the id command.

Conclusion

By following this guide, you can effectively manage users and groups on your system, enhancing both efficiency and security. For those interested in further enhancing their skills or exploring opportunities in software development and system administration, the [HNG Internship](https://hng.tech/internship) offers a comprehensive platform to learn, grow, and connect with a vibrant tech community. Additionally, for hiring top tech talent or exploring premium tech solutions, consider visiting [HNG Hire](https://hng.tech/hire) and [HNG Premium](https://hng.tech/premium).
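The article's `generate_password` helper is shown only as a screenshot (random bytes from urandom, encoded to printable characters). For readers who cannot view the images, here is an illustrative re-implementation of that idea in Go rather than Bash; the function name mirrors the article's description, and the length and encoding are my own assumptions:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// generatePassword sketches the script's generate_password helper:
// read random bytes from the OS CSPRNG, then encode them to a
// printable string and trim to the requested length.
func generatePassword(n int) (string, error) {
	buf := make([]byte, n)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	// base64 of n bytes is always at least n characters, so the slice is safe.
	return base64.RawURLEncoding.EncodeToString(buf)[:n], nil
}

func main() {
	pw, err := generatePassword(12)
	if err != nil {
		panic(err)
	}
	fmt.Println(pw)
}
```

The same round of hardening the Bash script applies here: store the result with restrictive permissions, as the article does for `/var/secure/user_passwords.txt`.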
horlatayorr
1,908,393
100% Free Vector Search with OpenLlama, Postgres, NodeJS and NextJS
So you want to try out vector search but you don’t want to pay OpenAI, or use Huggingface, and you...
0
2024-07-02T04:21:09
https://dev.to/jherr/100-free-vector-search-with-openllama-postgres-nodejs-and-nextjs-3jm5
react, node, nextjs, ai
---
title: 100% Free Vector Search with OpenLlama, Postgres, NodeJS and NextJS
published: true
date: 2024-07-02 01:37:40 UTC
tags: react,nodejs,nextjs,ai
canonical_url:
---

So you want to try out vector search but you don't want to pay [OpenAI](https://openai.com/), or use [Huggingface](https://huggingface.co/), and you don't want to pay a vector database company. I'm here for you. Let's get vector search going, on your own machine, for free.

![Abstract Thumbnail Image For This Article](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cfkdskv2ir6r555hg4vo.jpg)

### **What Are We Doing?**

Let's take a quick step back and talk about what we are doing and why we are doing it. Vector search of AI embeddings is a way to create a search based on concepts. For example, searching for 'pet' might yield results for both dogs and cats. This is super valuable because it means that your customers can get better search results.

To accomplish this we first take the text we want to search and send it to an AI for it to create an "embedding". An embedding is a lengthy array of floating point values, usually between ~300 and ~1500 numbers. The embedding values for cat, dog, and pet would be similar. So if you were to compare cat and dog they would be close, whereas dog and pizza would not be close.

What a vector database allows you to do is store these vectors, along with their associated data (probably the original text of the data). Once the data is stored you can then query the database with a new vector to get any nearby results. For example, if we stored cat and dog with their embeddings in the database, we could then take an input text of "pet", create the embedding for that, then use that to query the database and we would likely get back cat and dog.

### Why Postgres and OpenLlama?

Postgres is a fantastic database that you can easily install and run locally.
And with the pgvector extension to Postgres you can create vector fields that you can then use in your SQL queries. There are multiple ways to install Postgres on your machine. On my Mac I used the Postgres.app to install Postgres.

OpenLlama is a very easy way to install and run AI models locally. I used Homebrew to install OpenLlama using `brew install ollama`.

For our simple test application we'll load all the lines from the [1986 horror film Aliens](https://www.imdb.com/title/tt0090605/).

### Getting Set Up

There are [lots of models](https://ollama.com/library) to choose from with OpenLlama. For this application I chose [snowflake-arctic-embed](https://ollama.com/library/snowflake-arctic-embed) because it's ideal for fast creation of embeddings. To install it I used the command `ollama pull snowflake-arctic-embed`.

The last part of the setup is to create a local Postgres database. You can name it whatever you like; I chose lines because we are searching lines from a movie. With the database created we can use the psql command to run some commands. The first is to add the vector extension to the database. That enables the vector field type. To do that I use the create extension command:

```
CREATE EXTENSION vector;
```

Now we need to create a table to hold the line text as well as the vectors. Here are the commands to create the table along with an index on the position value, which is the position of the line in the script.

```
CREATE TABLE lines (
  id bigserial PRIMARY KEY,
  position INT,
  text TEXT,
  embedding VECTOR(1024)
);

CREATE UNIQUE INDEX position_idx ON lines (position);
```

The important thing to note here is the size of the vector. Different models create different sizes of vector. In the case of our snowflake model the embedding size is 1,024 numbers, so we set the vector size to that. You will want to use the same embedding AI for both storage and query. If you use different models then the numbers won't line up.
### Creating The Vector Indexes

As you can imagine, comparing two 1,024-value floating point arrays could be costly. And comparing lots of them could be very costly. So these new vector databases have come up with different indexing models to make that more efficient. The Postgres vector support has different types of indexes; we will use the [Hierarchical Navigable Small Worlds](https://www.pinecone.io/learn/series/faiss/hnsw/) (HNSW) type to create three different indexes:

```
CREATE INDEX ON lines USING hnsw (embedding vector_ip_ops);
CREATE INDEX ON lines USING hnsw (embedding vector_cosine_ops);
CREATE INDEX ON lines USING hnsw (embedding vector_l1_ops);
```

Why three? Because there are multiple ways to compare two vectors. There is cosine, which is often the default, and is good for doing more concept comparison. There are also Euclidean and dot-product comparisons. Postgres supports all of these methods (and more). Whichever you use, you will want to make sure the indexes are enabled for it so that you can get high speed queries.

### Loading The Database

With the model downloaded and Postgres set up, we can now start loading the database with our movie lines and their embeddings. I've posted the complete project, which also includes a NextJS App Router UI, on [github](https://github.com/jherr/aliens-vector-search). The [script is located in the load-embeddings directory](https://github.com/jherr/aliens-vector-search/blob/main/load-embeddings/aliens.script.txt). The original [data is from this script page](https://movietranscript.blogspot.com/2015/11/1986-aliens-english-transcript.html).

Before you can load the data you'll need to copy the .env.example files to .env and then change the values to match your Postgres connection details. To load the embeddings into Postgres run `node loader.mjs` with Node 20 or higher. The key parts of the script are the embedding generation:

```
import ollama from "ollama";

...

const response = await ollama.embeddings({
  model: "snowflake-arctic-embed",
  prompt: text,
});
```

Where we use the ollama library to invoke the snowflake embedding model with each line of text one-by-one. We then insert the line into the database using an INSERT statement:

```
await sql`INSERT INTO lines
  (position, text, embedding)
  VALUES (${position}, ${text}, ${`[${response.embedding.join(",")}]`})
`;
```

The only tricky thing here is how we format the embedding, which is by joining all the numbers together into a string and wrapping it in brackets. With all the data loaded into the database it's time we make a query.

### Making Our First Query

To make sure this works there is a [test-query.mjs file in the load-embeddings directory](https://github.com/jherr/aliens-vector-search/blob/main/load-embeddings/test-query.mjs). To make a vector query we first run the model to turn the query into a vector, like so:

```
const response = await ollama.embeddings({
  model: "snowflake-arctic-embed",
  prompt: "food",
});
```

In this case the prompt is food and we use exactly the same process as we did in the loader script to turn that into an embedding. We then use a SQL SELECT statement to query the database with that vector:

```
const query = await sql`SELECT position, text
FROM lines
ORDER BY embedding <#> ${`[${response.embedding.join(",")}]`}
LIMIT 10`;
```

We are using ORDER BY to order the records in the database by their similarity to the given embedding, then using LIMIT to get back just the top 10 most similar. The <#> syntax in the ORDER BY is important because it defines which comparison algorithm to use.
From the documentation our options are:

```
<-> - L2 distance
<#> - (negative) inner product
<=> - cosine distance
<+> - L1 distance (added in 0.7.0)
```

You can decide for yourself which comparison provides the best output for your application, but be sure to index the table properly based on that comparison method. On my machine this test query yielded, amongst other things:

```
317 Guess she don't like the corn bread, either.
```

Which is a classic line from the movie that indeed references a type of food (corn bread).

### Putting A User Interface On It

With a little extra effort I put a NextJS App Router interface on it that you can play with by running `pnpm dev` in the root directory of the project after the database has been loaded and the .env file set up properly. This NextJS app uses exactly the same SELECT operation to query the lines from the database.

### Conclusions

Obviously you're not going to take an Aliens script searching application to production. But from what I've shown you here you could search text content, product descriptions, comments, almost any kind of text.

Enjoy!
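As a footnote to the comparison-operator discussion above: the cosine similarity that pgvector's `<=>` operator is built on is a few lines of arithmetic. The article's code is Node, but the math is language-agnostic, so here is a toy sketch in Go. The three-dimensional vectors are made-up illustrative values, not real model embeddings (which would have ~1,024 dimensions):

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 (opposite) to 1 (identical direction).
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Toy "embeddings": dog and cat point in a similar direction,
	// pizza points somewhere else entirely.
	dog := []float64{0.9, 0.8, 0.1}
	cat := []float64{0.8, 0.9, 0.15}
	pizza := []float64{0.1, 0.2, 0.95}

	// dog~cat scores much higher than dog~pizza.
	fmt.Printf("dog~cat:   %.3f\n", cosine(dog, cat))
	fmt.Printf("dog~pizza: %.3f\n", cosine(dog, pizza))
}
```

Note that pgvector's `<=>` reports cosine *distance* (1 minus this similarity), which is why its `ORDER BY ... ASC` puts the most similar rows first.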
jherr
1,907,051
Getting to Know Asymmetric Encryption: High-Level Data Security with Golang
What Is Asymmetric Encryption? Asymmetric encryption is an encryption method that...
0
2024-07-02T01:17:55
https://dev.to/yogameleniawan/mengenal-asymmetric-encryption-keamanan-data-tingkat-tinggi-dengan-golang-4b17
go, programming, tutorial, learning
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2c6tdxpib3sz1rt1ok16.png)

### What Is Asymmetric Encryption?

Asymmetric encryption is an encryption method that uses two different keys: a public key and a private key. Unlike [Symmetric Encryption](https://yogameleniawan.com/learning-media/implementasi-metode-standard-symmetric-encryption-signature-pada-golang-2m5m), in asymmetric encryption the public key is used to encrypt data, while the private key is used to decrypt it. Because of this characteristic, asymmetric encryption is very useful in situations that require strong security and authentication. Here are some examples of when to use asymmetric encryption:

1. Securing Online Transactions

In online transactions, such as e-commerce purchases or online banking, asymmetric encryption is used to ensure that sensitive information, such as credit card numbers or personal data, is sent securely from the user to the server. The user encrypts the sensitive information using the public key provided by the server. Only the server, which holds the private key, can decrypt that information, so the data stays safe in transit.

2. Authentication and Login Systems

Asymmetric encryption is used in authentication protocols to verify a user's identity. Examples include digital certificates and SSL/TLS in HTTPS. When a user tries to log in, the server sends a challenge encrypted with the user's public key. The user then decrypts this challenge with their private key and sends it back to the server, proving their identity.

### What Are the Advantages and Disadvantages of Asymmetric Encryption?

#### Advantages of Asymmetric Encryption

- High Security: using two different keys reduces the risk of the encryption key being leaked.
- Authentication: it can be used to verify the identity of both the sender and the receiver of the data.
- Safe Key Distribution: the public key can be distributed freely without compromising security.

#### Disadvantages of Asymmetric Encryption

- Speed: asymmetric encryption is slower than symmetric encryption because the encryption and decryption processes are more complex.
- Key Size: the keys used in asymmetric encryption are usually longer, so they require more storage space and bandwidth.

---

### Examples of Asymmetric Encryption in Golang

Now let's look at how to use asymmetric encryption and digital signatures in Golang.

#### Generating an RSA Key Pair in Golang

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// generateKeyPair creates a new RSA key pair of the given size.
func generateKeyPair(bits int) (*rsa.PrivateKey, *rsa.PublicKey, error) {
	privateKey, err := rsa.GenerateKey(rand.Reader, bits)
	if err != nil {
		return nil, nil, err
	}
	return privateKey, &privateKey.PublicKey, nil
}

// savePEMKey writes the private key to a PEM-encoded file.
func savePEMKey(fileName string, key *rsa.PrivateKey) error {
	outFile, err := os.Create(fileName)
	if err != nil {
		return err
	}
	defer outFile.Close()

	privateKeyPEM := pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}
	return pem.Encode(outFile, &privateKeyPEM)
}

// savePublicPEMKey writes the public key to a PEM-encoded file.
func savePublicPEMKey(fileName string, pubkey *rsa.PublicKey) error {
	asn1Bytes, err := x509.MarshalPKIXPublicKey(pubkey)
	if err != nil {
		return err
	}
	publicKeyPEM := pem.Block{
		Type:  "RSA PUBLIC KEY",
		Bytes: asn1Bytes,
	}
	outFile, err := os.Create(fileName)
	if err != nil {
		return err
	}
	defer outFile.Close()
	return pem.Encode(outFile, &publicKeyPEM)
}

func main() {
	privateKey, publicKey, err := generateKeyPair(2048)
	if err != nil {
		fmt.Println("Error generating keys:", err)
		return
	}
	err = savePEMKey("private_key.pem", privateKey)
	if err != nil {
		fmt.Println("Error saving private key:", err)
		return
	}
	err = savePublicPEMKey("public_key.pem", publicKey)
	if err != nil {
		fmt.Println("Error saving public key:", err)
		return
	}
	fmt.Println("Keys generated and saved successfully!")
}
```

#### Encryption and Decryption with RSA in Golang

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// loadPrivateKey reads a PEM-encoded RSA private key from disk.
func loadPrivateKey(path string) (*rsa.PrivateKey, error) {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return nil, fmt.Errorf("no PEM data found in %s", path)
	}
	return x509.ParsePKCS1PrivateKey(block.Bytes)
}

// loadPublicKey reads a PEM-encoded RSA public key from disk.
func loadPublicKey(path string) (*rsa.PublicKey, error) {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return nil, fmt.Errorf("no PEM data found in %s", path)
	}
	publicKeyImported, err := x509.ParsePKIXPublicKey(block.Bytes)
	if err != nil {
		return nil, err
	}
	pubKey, ok := publicKeyImported.(*rsa.PublicKey)
	if !ok {
		return nil, fmt.Errorf("error parsing public key")
	}
	return pubKey, nil
}

// encryptMessage encrypts msg with the public key using RSA-OAEP.
func encryptMessage(pub *rsa.PublicKey, msg []byte) ([]byte, error) {
	return rsa.EncryptOAEP(sha256.New(), rand.Reader, pub, msg, nil)
}

// decryptMessage decrypts ciphertext with the private key using RSA-OAEP.
func decryptMessage(priv *rsa.PrivateKey, ciphertext []byte) ([]byte, error) {
	return rsa.DecryptOAEP(sha256.New(), rand.Reader, priv, ciphertext, nil)
}

func main() {
	privateKey, err := loadPrivateKey("private_key.pem")
	if err != nil {
		fmt.Println("Error loading private key:", err)
		return
	}
	publicKey, err := loadPublicKey("public_key.pem")
	if err != nil {
		fmt.Println("Error loading public key:", err)
		return
	}

	message := []byte("Hello, world!")
	ciphertext, err := encryptMessage(publicKey, message)
	if err != nil {
		fmt.Println("Error encrypting message:", err)
		return
	}
	fmt.Printf("Encrypted message: %x\n", ciphertext)

	plaintext, err := decryptMessage(privateKey, ciphertext)
	if err != nil {
		fmt.Println("Error decrypting message:", err)
		return
	}
	fmt.Printf("Decrypted message: %s\n", plaintext)
}
```

#### Digital Signatures in Golang

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// signMessage hashes msg with SHA-256 and signs the digest with RSA-PSS.
// loadPrivateKey and loadPublicKey are the same helpers as in the
// encryption example above.
func signMessage(priv *rsa.PrivateKey, msg []byte) ([]byte, error) {
	hashed := sha256.Sum256(msg)
	return rsa.SignPSS(rand.Reader, priv, crypto.SHA256, hashed[:], nil)
}

// verifyMessage checks the RSA-PSS signature against the SHA-256 digest of msg.
func verifyMessage(pub *rsa.PublicKey, msg, sig []byte) error {
	hashed := sha256.Sum256(msg)
	return rsa.VerifyPSS(pub, crypto.SHA256, hashed[:], sig, nil)
}

func main() {
	privateKey, err := loadPrivateKey("private_key.pem")
	if err != nil {
		fmt.Println("Error loading private key:", err)
		return
	}
	publicKey, err := loadPublicKey("public_key.pem")
	if err != nil {
		fmt.Println("Error loading public key:", err)
		return
	}

	message := []byte("This is a signed message.")
	signature, err := signMessage(privateKey, message)
	if err != nil {
		fmt.Println("Error signing message:", err)
		return
	}
	fmt.Printf("Signature: %x\n", signature)

	err = verifyMessage(publicKey, message, signature)
	if err != nil {
		fmt.Println("Error verifying message:", err)
		return
	}
	fmt.Println("Message verified successfully!")
}
```

### Conclusion

With asymmetric encryption, you can encrypt data with the public key and decrypt it with the private key. You can also sign messages with the private key and verify them with the public key. This approach provides a high level of security and ensures the integrity and authenticity of the data you send or receive. So make sure to use this method whenever you need strong security!
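As a quick sanity check, the encrypt/decrypt flow above can also be exercised end to end in a single self-contained program, without the PEM files (a minimal sketch using the same standard-library calls; the `rsaRoundTrip` helper name is mine):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// rsaRoundTrip generates a fresh 2048-bit key pair, encrypts msg with the
// public key (RSA-OAEP with SHA-256), decrypts it with the private key,
// and returns the recovered plaintext.
func rsaRoundTrip(msg string) string {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub := &priv.PublicKey

	ciphertext, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, pub, []byte(msg), nil)
	if err != nil {
		panic(err)
	}

	plaintext, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, priv, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	return string(plaintext)
}

func main() {
	fmt.Println(rsaRoundTrip("Hello, world!")) // prints: Hello, world!
}
```

Keeping the round trip in one process like this is handy for testing; in real use the public key is shared and the private key never leaves the server.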
yogameleniawan
1,908,278
Bad Vision
In addition to my main job as a programmer, I really like to draw. As you can understand, because of...
0
2024-07-02T00:59:46
https://dev.to/semyon_glinkin_c7cdc6c336/bad-vision-244f
webdev, programming, react, python
In addition to my main job as a programmer, I really like to draw. As you can understand, because of this I have very sore eyes. I decided that it's time to get [Prescription Glasses](https://glassesstore.co.uk/), and started looking for a suitable site. Found one, which I immediately liked because of the large selection and normal prices. I ordered a pair of glasses and was pleasantly surprised when they arrived quickly. I put them on and felt that my eyes were no longer so strained, and it was easier to work. I also liked the virtual fitting function - you can see at a glance whether they will fit you or not. I am very happy with my purchase.
semyon_glinkin_c7cdc6c336
1,908,272
What channel is cnn on for directv
In the realm of cable and satellite television, knowing how to locate specific channels is essential...
0
2024-07-02T00:42:11
https://dev.to/haya_naaz_6058eb103487294/what-channel-is-cnn-on-for-directv-32b6
cnn
In the realm of cable and satellite television, knowing how to locate specific channels is essential for accessing preferred programming. For many viewers, CNN holds a prominent place in their channel lineup, offering up-to-date news coverage and analysis. However, finding CNN on DirecTV can sometimes be a challenge, particularly for new subscribers or those unfamiliar with the channel lineup. In this guide, we’ll explore how to easily locate CNN on DirecTV, ensuring that viewers can access their favorite news network with ease. Whether you’re a news enthusiast or simply want to stay informed, understanding [what channel is CNN on for DirecTV](https://cnnwallet.com/2024/06/04/what-channel-is-cnn-on-for-directv/) is key to enhancing your viewing experience. Join us as we navigate through the DirecTV lineup and uncover the channel placement of CNN, empowering viewers to stay connected and informed in today’s fast-paced world.
haya_naaz_6058eb103487294
1,908,270
Am Ice Ritual Recipe
If you want to learn about the Am Ice Ritual recipe, you’ve come to the right place!Today, we’ll give...
0
2024-07-02T00:37:35
https://dev.to/haya_naaz_6058eb103487294/am-ice-ritual-recipe-o5
If you want to learn about the [Am Ice Ritual recipe](https://bigboxcrowd.com/am-ice-ritual-recipe/), you’ve come to the right place!Today, we’ll give you all the information you need about this recipe. Let’s dive into the delicious world of the Am Ice Ritual! This morning routine has been gaining popularity for its claimed benefits in weight loss and sensitive tone for the day ahead. But what exactly is this ritual, and why is it creating such a buzz in the health and wellness community? In this introduction, we’ll unravel the mysteries behind the Am Ice Ritual, explore its origins, and discover how it can become a refreshing addition to your morning routine for a healthier and more energized start to your day. In the health world, there’s something called the Am Ice Ritual. People say it’s like magic for your mornings and can help you lose weight and have a good day. But what is this ritual all about? And how can you make it part of your morning routine? In this article, we’ll look at where the Am Ice Ritual comes from, what good things it might do for you, and how you can make your own ritual to help you lose weight and feel better.
haya_naaz_6058eb103487294
1,908,269
Exploring the Scala Play Framework for Web Development
Introduction Scala is a popular programming language that has gained significant traction...
0
2024-07-02T00:33:02
https://dev.to/kartikmehta8/exploring-the-scala-play-framework-for-web-development-1o0p
javascript, beginners, programming, tutorial
## Introduction

Scala is a popular programming language that has gained significant traction in recent years. With its functional programming paradigm and object-oriented approach, it has become a popular language for web development. One of the main frameworks used with Scala for web development is the Play Framework. In this article, we will explore the advantages, disadvantages, and features of the Scala Play Framework for web development.

## Advantages of the Scala Play Framework

1. **Scalable:** The Play Framework is highly scalable, making it suitable for building web applications of any size.
2. **Fast development:** Using the Play Framework, developers can build robust web applications quickly.
3. **Modular architecture:** Play follows a modular architecture, making it easier to add and remove features without affecting the entire application.
4. **Built-in features:** The Play Framework comes with built-in components such as an ORM (Object Relational Mapper) and a web server, which helps in speeding up development.

## Disadvantages of the Scala Play Framework

1. **Steep learning curve:** The Play Framework can be challenging to learn for beginners due to its complex functional programming concepts.
2. **Limited community support:** Compared to other frameworks, the Play Framework has a smaller community, which may make it difficult to find help or resources.

## Features of the Scala Play Framework

1. **Asynchronous programming:** The Play Framework uses an async model, allowing for better performance and scalability.

```scala
// Example of asynchronous action in Play Framework
def index = Action.async {
  Future {
    Ok("Hello, Play Framework!")
  }(executionContext)
}
```

2. **Hot reloading:** Developers can make changes to their code and see the results immediately without having to restart the server.

```bash
# Example usage in Play development mode
sbt run
# Changes in the code will reflect immediately due to hot reloading.
```

3. **RESTful API support:** The Play Framework has built-in support for building RESTful APIs, making it suitable for developing modern web applications.

```scala
// Example of a simple RESTful API in Play Framework
def getMessage(id: Long) = Action {
  val message = MessageRepository.find(id)
  message match {
    case Some(msg) => Ok(Json.toJson(msg))
    case None      => NotFound
  }
}
```

## Conclusion

The Scala Play Framework is a powerful tool for web development, offering a range of features and advantages. With its scalability, fast development, and ready-to-use components, it is a compelling choice for building web applications. However, it may not be the most suitable framework for beginners due to its steep learning curve. Despite this, the Play Framework continues to gain popularity among developers, and we can expect to see more impressive web applications built using this framework in the future.
kartikmehta8
1,908,252
Bash Scripting for Automating User Management
User accounts and groups can be managed more easily and quickly by using a bash script in Linux. A...
0
2024-07-01T23:01:46
https://dev.to/chuks_dozie_b155978baf38c/bash-scripting-for-automating-user-management-392
User accounts and groups can be managed more easily and quickly by using a bash script in Linux. A structured file format containing users’ information is employed in this script to aid in the efficient creation and management of user accounts.

**Introduction**

In Linux environments, user management often entails administrative privileges for activities such as creating users, managing passwords, group management and permissions. This article provides a bash script that can automate these functions, making them more efficient.

**Script Overview**

The bash script provided automates user creation and management. Here are its various functionalities:

1. Root Privilege Check: Makes sure the script is run with superuser rights, which are needed for administrative operations.
2. Input File Validation: Checks whether the input file exists and exits if it does not. The input file contains usernames and groups separated by semicolons.
3. Logging: All activities performed by the script are logged into /var/log/user_management.log to keep track of user creation, group management, and any errors encountered.
4. Password Management: For each user it generates a random password, which is stored securely in /var/secure/user_passwords.csv.
5. User and Group Creation: It goes through every line found within the input file, creating each user and the groups listed for them.

By the way, I did this during my Internship with HNG. Check them out: [https://hng.tech/internship](https://hng.tech/internship) or [https://hng.tech/hire](https://hng.tech/hire)
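The password-generation and input-parsing steps described above could be sketched like this (illustrative only; the function names are mine, no accounts are created, and nothing is written to /var here):

```shell
#!/usr/bin/env bash

# Generate a random 12-character alphanumeric password from the kernel RNG.
gen_password() {
  LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12
}

# Parse one "username; group1,group2" line from the input file,
# trimming surrounding whitespace from each field.
parse_line() {
  local line="$1" user groups
  user="$(echo "${line%%;*}" | xargs)"
  groups="$(echo "${line#*;}" | xargs)"
  echo "user=$user groups=$groups"
}

parse_line "alice; dev,admin"   # prints: user=alice groups=dev,admin
echo "password: $(gen_password)"
```

A real version of the script would follow these helpers with `useradd`/`groupadd` calls, the logging to /var/log/user_management.log, and the root-privilege check mentioned above.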
chuks_dozie_b155978baf38c
1,908,268
Introduction to Nuxt.js: The Framework for Universal Vue.js Applications
Nuxt.js is a powerful, open-source framework built on top of Vue.js for creating universal...
0
2024-07-02T00:19:58
https://kristijan-pajtasev.medium.com/introduction-to-nuxt-js-the-framework-for-universal-vue-js-applications-27c66ab0df2c
nuxt, vue, javascript, frontend
Nuxt.js is a powerful, open-source framework built on top of Vue.js for creating universal applications. It simplifies the development of server-rendered Vue applications and static websites. Here's an overview to get you started with Nuxt.js and understand why it might be the right choice for your next project.

## What is Nuxt.js?

Nuxt.js extends Vue.js by providing an opinionated and robust structure for building applications. It abstracts away much of the configuration required for managing Vue applications and offers a streamlined development experience.

## Key Features of Nuxt.js

1. **Universal Applications**: Nuxt.js allows you to create universal (isomorphic) applications, which means the same code can run on both the client and the server. This capability enhances SEO, improves performance, and provides a better user experience.
2. **Automatic Code Splitting**: Nuxt.js automatically splits your code into smaller chunks, which are loaded on demand. This improves page load times and application performance.
3. **Powerful Routing System**: Nuxt.js leverages file-based routing, meaning you can create routes by simply adding Vue files in the pages directory. This approach simplifies the process of managing routes in your application.
4. **Server-Side Rendering (SSR)**: Nuxt.js makes it easy to enable SSR, which renders your Vue components on the server and sends the fully rendered HTML to the client. This results in faster initial load times and better SEO.
5. **Static Site Generation (SSG)**: With Nuxt.js, you can generate static websites. This approach combines the benefits of static sites (like speed and security) with the flexibility of dynamic content.
6. **Module System**: Nuxt.js comes with a modular architecture that allows you to extend the framework’s capabilities using plugins and modules. Examples include PWA support, authentication, and analytics.
7. **Development Tools**: Nuxt.js provides a set of development tools, including a powerful CLI, hot module replacement, and detailed debugging capabilities, which enhance the development experience.

## Getting Started with Nuxt.js

To get started with Nuxt.js, you need to have Node.js and npm (or yarn) installed on your machine. Here’s a quick guide to setting up a new Nuxt.js project:

1. **Install Nuxt.js**: Use the following commands to create a new Nuxt.js project:

```bash
npx create-nuxt-app <project-name>
# or
yarn create nuxt-app <project-name>
```

Follow the prompts to set up your project configuration.

2. **Project Structure**: After creating your project, you’ll notice the following directories and files:
- `pages/`: Vue files in this directory correspond to routes in your application.
- `layouts/`: Define layout components that can be used across different pages.
- `components/`: Reusable Vue components.
- `static/`: Static files (like images and fonts) that won’t be processed by Webpack.
- `store/`: Vuex store files for state management.

3. **Development Server**: Run the development server with the following command:

```bash
npm run dev
# or
yarn dev
```

Your Nuxt.js application will be accessible at `http://localhost:3000`.

## Example: Creating a Simple Page

To create a simple page in your Nuxt.js application, add a new Vue file in the `pages` directory:

```vue
<!-- pages/index.vue -->
<template>
  <div>
    <h1>Welcome to Nuxt.js</h1>
    <p>This is your homepage.</p>
  </div>
</template>

<script>
export default {
  head() {
    return {
      title: 'Home Page',
      meta: [
        { hid: 'description', name: 'description', content: 'My custom description' }
      ]
    }
  }
}
</script>

<style>
/* Add your styles here */
</style>
```

This simple example demonstrates how to create a new page, manage metadata for SEO, and add styles within a single file.

## Conclusion

Nuxt.js enhances the capabilities of Vue.js by providing an intuitive and flexible framework for building universal applications.
Whether you're developing a server-rendered app, a static site, or a single-page application, Nuxt.js offers the tools and structure needed to streamline your workflow and optimize your application. By leveraging Nuxt.js, developers can focus on writing application-specific code without worrying about the underlying configuration, making it an excellent choice for both small and large-scale projects. *** For more, you can follow me on [Twitter](https://twitter.com/hi_iam_chris_), [LinkedIn](https://www.linkedin.com/in/kpajtasev/), [GitHub](https://github.com/kristijan-pajtasev/), or [Instagram](https://www.instagram.com/hi_iam_chris_/).
hi_iam_chris
1,908,266
Day 982 : Last Nite
liner notes: Saturday : Went to the station and did the radio show. Had a pretty chill day. The...
0
2024-07-02T00:12:10
https://dev.to/dwane/day-982-last-nite-bo
hiphop, code, coding, lifelongdev
_liner notes_:

- Saturday : Went to the station and did the radio show. Had a pretty chill day. The recording of this week's show is at https://kNOwBETTERHIPHOP.com ![Radio show episode image of a scene from &quot;Do The Right Thing&quot; with a person holding up their fists to the camera to show their finger rings that spell out Love Hate with the words June 29th 2024 Eyes on the Ball](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8ss44fk7xisslwe50ds.jpg)

- Sunday : Did my study sessions. Didn't do a lot of coding, but I got some other things done like paying my bills for the upcoming month. Another pretty chill day. Ended the night watching an episode of "Demon Slayer".

- Professional : Had a meeting to start the day, then another like an hour later. Went over the content review of my blog post and spoke with the reviewer. Not a lot of things were changed which was pretty good. Replied to some community questions. Found a couple of small typos in a couple of tutorials that I did, so I fixed those. I forgot to capitalize a letter in a variable name. Started a quick project to test a new feature. Spoke with the team about a refactor of an older application to make sure everyone was on the same page. Pretty productive day to start off the week.

- Personal : So, I've been playing around with this logo. I think I finally got how I want it to look. I just need to finalize it and export it so I can add it to my site. ![An aerial view of a rocky shoreline, with turquoise ocean water lapping at the shore. The rocks are white and jagged, and there is a small patch of green grass at the top of the cliff. The image is taken from a high angle, so the ocean appears vast. Location: Zarautz, Spain](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9vnxdecmhevgy081m13k.jpg)

Whew, I was pretty tired after work and took a nap while trying to write up this blog.
Going to work on this logo and map out how I want the layout for the front end of this other side project to look so I can start working on it. That backend is pretty much done. I just need to update some things and make sure it works. Looking to repeat last nite and watch an episode of "Demon Slayer". Have a great night! peace piece Dwane / conshus https://dwane.io / https://HIPHOPandCODE.com {% youtube kugXpZjI5RM %}
dwane
1,908,265
Mobile Development Platforms and Common Software Architecture Patterns Used
In today's digital age, mobile applications have become an integral part of our daily lives, driving...
0
2024-07-02T00:05:33
https://dev.to/norheem/mobile-development-platforms-and-common-software-architecture-patterns-used-46ba
In today's digital age, mobile applications have become an integral part of our daily lives, driving the need for robust and efficient mobile development platforms. These platforms facilitate the creation of mobile apps and ensure they are scalable, maintainable, and user-friendly. Alongside these platforms, various software architecture patterns are crucial in structuring the code and enhancing the overall development process. This article discusses the most popular mobile development platforms and explores the common software architecture patterns that developers use to build high-quality mobile applications.

Before we talk about these platforms, let's clarify what mobile development, mobile development platforms, software architecture, and software architecture patterns are, to establish a clear understanding of the creation and structuring of mobile applications.

Mobile development refers to the process of creating software applications that run on mobile devices such as smartphones and tablets. This involves designing, coding, testing, and deploying applications for mobile operating systems like iOS and Android. Mobile development can be done using native languages specific to the platform (e.g., Swift for iOS, Kotlin for Android) or through cross-platform frameworks that allow a single codebase to be used across multiple platforms (e.g., Flutter).

Mobile development platforms are the environments and tools that developers use to create mobile applications. For example, iOS uses Xcode as the integrated development environment (IDE) and Swift or Objective-C as the programming languages, while Android uses Android Studio as the IDE and Java or Kotlin as the programming languages. React Native is a framework that allows developers to use JavaScript and React to build mobile apps for both iOS and Android.
Flutter is a UI toolkit from Google that uses the Dart programming language to create natively compiled applications for mobile, web, and desktop from a single codebase. Xamarin is a Microsoft-owned framework that uses C# and .NET to create cross-platform apps, and Ionic is a framework that uses web technologies like HTML, CSS, and JavaScript to build cross-platform mobile apps.

Software architecture refers to the high-level structure of a software system, defining how different components and modules interact with each other. It involves making decisions about the organization of the code, the selection of design patterns, and the overall structure to ensure the system is scalable, maintainable, and efficient. Good software architecture helps in managing complexity and facilitates easier maintenance and evolution of the software. Software architecture patterns are reusable solutions to common problems in software design. They provide a template for how to structure the code and organize the components of a software system. Common patterns used in mobile development include:

Model-View-Controller (MVC) is a software architecture pattern that separates an application into three interconnected components, each with distinct responsibilities. The Model represents the data and business logic of the application, handling data retrieval, storage, and manipulation. The View is responsible for displaying the data to the user and rendering the user interface elements. The Controller acts as an intermediary between the Model and the View, processing user input, updating the Model, and refreshing the View accordingly. This separation of concerns makes the codebase more modular and easier to manage, and it facilitates parallel development, as different team members can work on the Model, View, and Controller simultaneously.
Model-View-ViewModel (MVVM) is a software architecture pattern similar to MVC, but it introduces a ViewModel that handles the presentation logic and state, making the user interface easier to manage and test. This pattern is particularly useful in data-binding scenarios, where changes in the ViewModel automatically reflect in the View.

Model-View-Presenter (MVP): in the MVP pattern, the Presenter is responsible for handling all the UI logic, which makes the View passive and easier to test. This separation allows for more modular code, as the Presenter can be developed and tested independently of the View.

Before I round it up, I would like to tell you about a platform that allows you to learn and test your abilities by building amazing stuff in an intensive 8-week BootCamp. I'm privileged to have been accepted and to be part of the new BootCamp that starts today, July 1st. I have been told by previous students that it's a very challenging program, but also that there is a lot of fun in it. I recommend the program to any tech enthusiast interested in doing things they like and keen to test their knowledge. You can try this internship program by registering for free here: https://hng.tech/internship. If you want to enrol for a premium account, where you get access to a lot of materials and a certificate at the end of the internship, you can register here: https://hng.tech/premium. Finally, you can search for and hire an elite from their platform through https://hng.tech/hire.

In conclusion, understanding the foundational concepts discussed above is crucial for anyone involved in the creation and structuring of mobile applications, as they provide the necessary framework and guidelines for building robust, maintainable and scalable software.
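The MVC separation described above can be made concrete with a tiny, framework-free sketch (shown in Java for illustration, since the article names it as an Android language; the class names are mine):

```java
// Minimal MVC sketch: the Controller mediates between the Model (data)
// and the View (presentation), which never talk to each other directly.
class Model {
    private String data = "";
    void setData(String d) { data = d; }
    String getData() { return data; }
}

class View {
    // Renders whatever data it is handed; knows nothing about the Model.
    String render(String data) { return "Displaying: " + data; }
}

class Controller {
    private final Model model;
    private final View view;

    Controller(Model model, View view) {
        this.model = model;
        this.view = view;
    }

    // User input flows through the Controller into the Model...
    void updateData(String input) { model.setData(input); }

    // ...and the View is refreshed from the Model's current state.
    String refresh() { return view.render(model.getData()); }
}

public class Main {
    public static void main(String[] args) {
        Controller controller = new Controller(new Model(), new View());
        controller.updateData("hello");
        System.out.println(controller.refresh()); // prints: Displaying: hello
    }
}
```

Because the View is passive and the Model is plain data, each piece can be swapped or unit-tested on its own, which is the modularity benefit the pattern descriptions above refer to.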
norheem
1,908,214
How to use Midjourney AI on Make.com?
Make.com, formerly known as Integromat, is a powerful automation platform that connects various...
0
2024-07-01T23:59:57
https://dev.to/renaud7/how-to-use-midjourney-ai-on-makecom-5cb6
automation, ai, nocode
Make.com, formerly known as Integromat, is a powerful automation platform that connects various applications and services. Unlike Zapier, which uses "Zaps," Make.com employs "scenarios" to create complex, multi-step workflows. These scenarios can link multiple apps and services, allowing for intricate automation processes without requiring coding skills.

[Midjourney AI](https://midjourney.co.in), an AI-driven image generation tool, can be seamlessly integrated with Make.com's automation capabilities. This integration opens up possibilities for automating image creation based on data from diverse sources. For instance, you could set up a scenario where information from a Google Sheets document triggers Midjourney to generate corresponding images.

This Midjourney-Make.com combination can significantly enhance productivity by eliminating manual steps in the image creation process. Whether you're working with data from databases, CRMs, or communication tools, the integration ensures a smooth, automated flow from data to image generation. This setup is particularly valuable for businesses and creatives who require frequent, data-driven image creation.

# How to get started?

To get started, you need to have a Make.com account; if you don't, create a new one [here](https://www.make.com/en/register?pc=apiframe). The account creation process is quite simple. You will then need to create an account on APIFRAME.

![Log in APIFRAME](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f6lho2njkgfr9foae9tv.png)

To create an account on APIFRAME, follow these steps:

1. **Visit the APIFRAME Website**: Open your web browser and go to https://apiframe.pro.
2. **Locate the Sign-Up Option**: On the homepage, look for a "Connect" or "Get Access Now" button. This is typically found at the top-right corner of the screen or prominently displayed on the homepage.
3. **Enter Your Details**: You will be prompted to enter your email address. You don't need a password.
Make sure to use a valid email address. You can also use your GitHub account to sign up!
4. **Verify Your Email**: After submitting your details, APIFRAME will send a confirmation email to the address you provided. Open this email and click on the verification link to confirm your account.
5. **Complete Profile Setup**: Once your email is verified, click on "My Account" in the top-right corner and copy your API Key. You will need it for Make.
6. **Start Using APIFRAME**: You are provided with some test credits; you may want to start a trial or upgrade your account for any advanced use.

By following these steps, you will have successfully created an account on APIFRAME and can begin generating Midjourney AI images.

# Midjourney AI on Make.com: Automate now!

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3fn54vpwtw8hj34jq1h2.png)

To start using Midjourney AI on Make.com, follow these steps:

1. **Obtain the Invite Link**: To get started, you will need an invite link. It can be found on the APIFRAME website homepage, but I will make it easier for you; here is the link: https://www.make.com/en/hq/app-invitation/8789e9dcc0943cd0a146cabd353637de
2. **Click the Invite Link**: Open the invite link in your web browser. This will take you to a Make page where you can accept the invitation to use the Midjourney AI by APIFRAME app.
3. **Log In to Make**: If you are not already logged in, you will be prompted to log in to your Make account. Enter your credentials to proceed.
4. **Install the Midjourney AI by APIFRAME.PRO app**: On the invitation page, click the "Install" button. This will add the Midjourney AI by APIFRAME app to your Make account, making it available for use in your Scenarios.
5. **Create a Scenario**: After installing the app, you can start creating a Scenario using the Midjourney AI by APIFRAME.PRO app.
Click on "Create a new scenario" from your Make navigation bar, and select Midjourney AI by APIFRAME from the list of available apps.
6. **Configure the Module**: Follow the on-screen instructions to configure the trigger and actions for your Scenario. You will need the APIFRAME API Key you copied earlier to connect your APIFRAME account on Make.
7. **Test and Activate the Scenario**: Once you have configured your Scenario, test it to ensure it works as expected. If the test is successful, activate the Scenario to start automating your workflows.

# List of Available Midjourney AI Actions on Make.com

- Imagine
- Reroll
- Upscales (Simple, Creative, Subtle, 2x and 4x)
- Variations
- Faceswap
- Inpaint (Vary Region)
- Outpaint (Zoom out)
- Pan
- Describe
- Blend
- Seed
- Fetch
- Get Account Details
renaud7
1,908,244
GSoC Week 5
In hindsight, I realized I did many things this week. This might be a testament to my GSoC project's...
27,442
2024-07-01T23:44:59
https://dev.to/chiemezuo/gsoc-week-5-5gba
gsoc, googlesummerofcode, wagtail, opensource
In hindsight, I realized I did many things this week. This might be a testament to my GSoC project's progress and overall status so far. There was a lot of focused work.

## Weekly Check-in

I had recovered from my illness just in time for the week's tasks. The goal was to make up for lost time from the previous week. We started with a review of the backlog and pending actions from the discussions over the previous week. Most of the meeting was centered on getting a finished product (PR) ready so that the codebase maintainers would have enough time to review the code, make suggestions, ask for improvements in certain areas, properly document, perform any other internal processes, accept, and merge. The idea is that for all of this to be done before the end of the GSoC deadline, the PR must be ready well in advance.

The first major delay was that the RFC didn't have as many comments as we would have liked. We understood that not everyone from the core team would have a chance to look at it right away, but we were looking out for some form of "negative feedback". Negative, in the sense that someone would be against our approach, so that we could look for a more optimal way if we had missed some details. My mentors decided that it would be good to push the RFC to the wider Wagtail community. We collectively brought up some suggestions:

1. Making an announcement in the Slack announcements channel.
2. Featuring it in the "This Week in Wagtail" newsletter.
3. Making a social media post about the RFC and getting it posted via the Wagtail social media handles.

Afterward, I promised to fix all the failing tests and complete the new ones for the edge cases that needed to be tested. I planned on getting the PR into a completely reviewable state by the end of the week.
## Publicizing the RFC

After the meeting, I ran the suggestions for pushing the RFC to the wider community by Wagtail's community manager, and she was kind enough to quickly get the RFC featured in the newsletter before starting her holidays. She also mentioned some other people I could talk to in her absence and hinted at the fact that I could make the post on my own social media handles, and then Wagtail could amplify it on theirs. It sounded like a good plan I could work with. I elected to do this much later in the week, to give some time between the newsletter post and the official posts.

Towards the end of the week, I brought the topic up with another one of the Wagtail admins, and he suggested that a nice-to-have post would be one where I could do a demo showcasing my proposed changes, provided I was comfortable with it. This discussion happened on a late Friday afternoon, so I chose not to make any decision until consulting with my mentors the following week.

## Finishing the draft PR

For months since opening the draft PR, there were lots of test cases that were failing or just flat-out unaccounted for. This was mostly because Wagtail's default behaviour for the `ImageChooserBlock` was to fall back to titles as alt text in the absence of any alt text. At the time of writing, Wagtail defaults image titles to the name of the image file itself, which, seeing as editors often overlook it, tends to be a 'rubbish' default. Bad alt text is often worse than empty alt text because, unlike the former, the latter can easily be spotted by an accessibility checker. Now, because of the existing behaviour, lots of test cases were checking that the returned image HTML had an alt attribute that defaulted to the title of the file. I had to remove all occurrences of those test scenarios.
I also had to write some tests for the Wagtail internals that we modified to make the new block work, e.g. to test that every image instance from the cache matched the corresponding image instance, and to test that each image was retrieved, regardless of whether it had the same image ID or not. I got some help from Wagtail's lead developer, as he helped clarify some things, and I got to work on the test cases. The final stage of getting the tests to work was to update my copies to the latest versions on the parent repo, and then rebase my commits so as not to scatter the commit history thus far. After all of this, I updated my branch, and for the first time in months, I had no CI errors. All the tests and actions ticked green.

> Note: I still left the 'draft' status on so that my mentors would be able to get a good look at it.

## AI goal feasibility research

I went through the Accessibility team's spreadsheet to compare some alt text data. I went through well over a hundred sample images from existing Wagtail sites and examined the alt text that was written by humans. I compared this to the output from AI models when the same image was passed in with a prompt to generate alt text. It was surprising to see that even without context, AI models (ChatGPT specifically, as it was what we used) did a better job of generating alt text than human editors. This put into perspective the fact that the intentions behind the project were justified, and the next step for me was to explore models and learn about the model/API contract that would be needed to integrate Wagtail with these AI models.

## Accessibility meeting

I had an accessibility team meeting by the end of the week, and the other GSoC contributor (Nandini) was there. It was nice to interact with her briefly, and the team as a whole discussed the progress with the new alt text validation rules for the accessibility checker. The team had done an amazing job on it.
I gave a brief update on the status of my work and got to see Nandini's progress so far on generating a low-carbon-footprint template for Wagtail pages. Wagtail is doing some amazing stuff, and I am excited about what we'll have by the end of this year.

## Challenges

I had a tough time trying to work around the validation error messages without touching the Python code, but I asked someone more experienced within the Wagtail community to take a look at it, and he promised to look into it, as well as give a colleague of his a chance to look at it too. I also had to get feedback from the two biggest Wagtail code maintainers to make sure they were okay with me modifying the existing tests. It wasn't a challenge in itself, but it did involve me navigating around a topic I found scary to touch. Both maintainers were in total support of it, provided the new behaviour was what was intended and was better than the previous way of doing things. It was a week full of activity, but there weren't as many challenges, as I'd become more well-versed in the things I needed to know.

## What I learned

The RFC did receive some more feedback, and from the comments I saw, I learned just how compatible the `ImageBlock` would be with the existing `ImageChooserBlock`, because they operate similarly. Thanks to my lead mentor Storm, I also learned about factors to consider when drafting 'upgrade considerations' for newer releases, as well as the thought process that goes into it.

With that, I can call my week 5 of GSoC a wrap. I'm growing increasingly proud of the work I am doing. Thank you for reading. Cheers. 🥂
chiemezuo
1,908,259
Best Free Online Tools for PDF Management in 2024
Best Free Online Tools for PDF Management in 2024 Managing PDF files efficiently is essential for...
0
2024-07-01T23:38:03
https://dev.to/digitalbaker/best-free-online-tools-for-pdf-management-in-2024-dlk
pdf, converters, free, pdftoword
**Best Free Online Tools for PDF Management in 2024**

Managing PDF files efficiently is essential for both personal and professional tasks. Numerous online tools offer a range of features to help with everything from editing and converting to compressing and merging PDFs. Here, we explore some of the best free online tools for PDF management in 2024, each providing unique features and user-friendly interfaces.

**1. Adobe Acrobat**

Adobe Acrobat is a leading name in PDF management. The free online version of Adobe Acrobat allows users to view, comment on, and share PDFs. It also includes basic tools for editing, converting, and signing PDFs. Adobe's robust platform ensures high-quality results and reliability.

Features:
- View, comment, and share PDFs
- Basic editing tools
- Convert to and from PDF
- Securely sign documents

Website: [Adobe Acrobat Online](https://www.adobe.com/acrobat/online.html)

**2. Foxit PDF Editor**

Foxit PDF Editor offers a comprehensive suite of tools for PDF management. The free online version allows users to edit text, images, and pages within PDFs. It also provides features for converting PDFs to other formats and vice versa.

Features:
- Edit text, images, and pages
- Convert to and from PDF
- Annotate and comment on PDFs
- Merge and split PDFs

Website: [Foxit PDF Editor Online](https://www.foxit.com/pdf-editor/)

**3. Nitro Pro**

Nitro Pro's free online tools are designed to make PDF editing and conversion simple and effective. Users can convert PDFs to various formats, merge multiple files into one, and edit PDF content directly.

Features:
- Convert PDFs to Word, Excel, and PowerPoint
- Merge PDFs
- Edit text and images
- Secure and sign PDFs

Website: [Nitro Pro Online](https://www.gonitro.com/)

**4. PDF Expert**

PDF Expert provides a sleek and user-friendly interface for managing PDFs. While primarily a paid tool, it offers a free online version with essential features like reading, annotating, and filling out PDF forms.
Features:
- View and annotate PDFs
- Fill out forms
- Basic editing tools
- User-friendly interface

Website: [PDF Expert Online](https://pdfexpert.com/)

**5. ILovePDF3**

ILovePDF3 is a versatile tool offering a wide range of PDF management features for free. Users can merge, split, compress, convert, and edit PDFs easily. Its intuitive design makes it accessible for users of all skill levels.

Features:
- Merge and split PDFs
- Compress PDFs
- Convert to and from PDF
- Edit PDF content

Website: [ILovePDF3](https://ilovepdf3.com/)

**6. Smallpdf**

Smallpdf provides a comprehensive suite of tools for handling PDFs. It offers free services for compressing, converting, merging, and editing PDFs. Smallpdf's cloud-based platform ensures that your files are accessible from anywhere.

Features:
- Compress PDFs
- Convert to and from PDF
- Merge and split PDFs
- Edit PDF content

Website: [Smallpdf](https://smallpdf.com/)

**7. PDF Candy**

PDF Candy is a versatile tool that offers a variety of PDF management options. Users can convert, merge, split, compress, and edit PDFs for free. Its straightforward interface makes it easy to use for everyone.

Features:
- Convert to and from PDF
- Merge and split PDFs
- Compress PDFs
- Edit PDF content

Website: [PDF Candy](https://pdfcandy.com/)

**8. PDF2Go**

PDF2Go offers a range of online tools for managing PDFs. It allows users to convert, compress, merge, and edit PDFs for free. The platform is designed to be user-friendly and efficient, catering to both basic and advanced PDF needs.

Features:
- Convert to and from PDF
- Compress PDFs
- Merge and split PDFs
- Edit PDF content

Website: [PDF2Go](https://www.pdf2go.com/)

**9. Convertio**

Convertio is a powerful tool for converting files between various formats, including PDFs. Its free online version supports a wide range of conversions, making it a valuable resource for anyone needing to handle PDFs and other document types.
Features:
- Convert PDFs to various formats
- Batch conversions
- OCR for scanned documents
- Simple drag-and-drop interface

Website: [Convertio](https://convertio.co/)

**Conclusion**

Whether you need to edit, convert, merge, or compress PDFs, these free online tools offer a variety of features to suit your needs. Each platform has its unique strengths, ensuring that you can find the perfect tool for your PDF management tasks. By leveraging these resources, you can handle PDFs more efficiently and effectively, saving time and enhancing productivity.
digitalbaker
1,908,258
New Learning Journal
App Dev System software manages physical hardware on computer and runs on top aka...
0
2024-07-01T23:28:22
https://dev.to/vdircio/new-learning-journal-2nm6
# App Dev

- System software manages the physical hardware on a computer, and other software runs on top of it (aka the operating system)
- Utility software like the system clock, file manager, and clipboard are tools provided by the operating system to software developers, who then build applications
- Application software is software that people use directly
- App - application software
- Learning how to build cloud-based software - SaaS
- URL - Uniform Resource Locator

# Elevator Speech

- Part 1 - Who you are: Hey. My name is Victor and I am a recent graduate from the University of Illinois with a major in mathematics and a minor in computer science, and I am seeking a position where I can utilize my analytical and development skills.
- Part 2 - Highlight reel: Most recently I strengthened these skills when I was an IT Specialist at the Bureau of Land Management where I enhanced file organization and retrieval by developing their hub website, maintaining an attractive user interface, and facilitating the transition from local to cloud storage. Also, I have experience in software engineering, including virtual reality, mobile app and web development.
- Part 3 - Your company’s projects resonate with my interests and skills. I’m eager to bring my strong analytical abilities and enthusiasm for tech to your team. This apprenticeship is an opportunity for me to grow and develop in a dynamic environment. My hands-on experience with complex projects, combined with my ability to quickly adapt and learn new technologies, makes me a strong fit for this role. I’m excited to learn more about how I can support your organization and further my career in tech.
vdircio
1,908,255
ReactJS vs. Angular: Which is better?
** Introduction ** In the fast-paced world of frontend development, selecting the right...
0
2024-07-01T23:09:46
https://dev.to/praiselaurine/reactjs-vs-angular-which-is-better-3p6n
react, career, frontend
## Introduction

In the fast-paced world of frontend development, selecting the right framework or library can significantly impact your project's success. This article compares ReactJS and Angular, two popular choices, highlighting their differences, strengths, and some of their benefits. Also, have you heard of the [HNG internship program?](https://hng.tech/internship) I'll share my expectations and experiences in the HNG Internship program, where ReactJS plays a pivotal role.

## What is React?

React is a front-end JavaScript library used to build both reusable UI components and user interfaces. React offers flexibility and performance-based solutions due to its use of server-side rendering. It helps developers create seamless UX and complex UI.

**_What does React have over Angular?_**

1. Building blocks flexibility
2. Isomorphic JavaScript
3. Single data binding

**_Advantages of ReactJS_**

ReactJS provides plenty of brilliant benefits at the front end for users and developers. Here are some of the advantages of ReactJS:

1. React enables an easy debugging process. The code is reusable.
2. Easy to learn due to its design, which is easy and simple.
3. It makes it very easy for developers to migrate an app in React.
4. Supports both Android and iOS platforms.
5. ReactJS is view-oriented.

## What is Angular?

Angular is an open-source web application framework developed by Google and is used to build dynamic, robust, single-page, and enterprise-grade applications. It has an embedded library and features like client-server communication, routing, and RxJS, among many others.

**_What does Angular have over React?_**

Among the special features for which Angular stands foremost are the following:

1. Two-way data binding
2. MVC model
3. Dependency injection
4. Opinionated architecture

**_Benefits of Angular_**

1. It has a single option for routing. Moreover, it enables highly interactive UIs because of its data binding features.
2. Angular extends HTML syntax. With directives, Angular empowers developers to devise reusable UI components.
3. Data in the model, view, and component stay synchronized.
4. Ease in building, maintaining, testing, and updating, with fewer foot guns.

## Key Differences Between Angular and ReactJS

**_1. Architecture and Learning Curve_**

ReactJS: It focuses on the view aspect of the MVC pattern. For a developer who already knows JavaScript and JSX, the learning curve of ReactJS is much gentler. Angular: It is a full-fledged MVC framework with TypeScript, so it requires a much steeper learning curve but provides more out-of-the-box structure.

**_2. Community and Support_**

ReactJS: Has a very large and active community, with extensive third-party libraries and resources. Angular: It is supported by Google, with regular updates and exhaustive documentation.

**_3. Rendering and Performance_**

ReactJS: This library uses a virtual DOM for efficient updates and rendering. Angular: It uses two-way data binding and change detection, which may cause performance problems for larger applications.

I look forward to working a great deal with ReactJS as a participant of this [HNG Internship](https://hng.tech/internship). Here's what I expect:

1. I look forward to improving my skills in building dynamic and responsive user interfaces.
2. Mastering the best practices of state management and performance optimization.
3. Building real-world projects, solving problems collaboratively with peers and mentors.

Also, if you are looking to find and hire elite freelance talent, go to [HNG Hire](https://hng.tech/hire). They’ll offer you the best talents in any tech field!

Have you tried integrating ReactJS or Angular into your projects? Share your experiences and insights!
praiselaurine
1,908,249
Don’t trust AI, trust tests
In my very first story, I talked about my experience with AI in the form of GitHub Copilot. It...
27,942
2024-07-01T23:03:52
https://medium.com/@kinneko-de/63c979d0b094
ai, githubcopilot, unittest, go
In my very first story, I talked about my experience with AI in the form of GitHub Copilot. It betrayed me again. But I was gently caught by my true lover: UnitTest

***

I am currently working on code that receives a file using a [grpc stream](https://grpc.io/docs/languages/go/basics/). The file is sent in byte chunks. Go has a [nice functionality where you can determine the media type](https://pkg.go.dev/net/http#DetectContentType) of a file from the first 512 bytes. I do not want to keep all the bytes sent in memory, so my goal is to have a byte array of exactly 512 bytes at the end to sniff the media type. All other bytes should be written to a physical file storage and then discarded. I am not that experienced in working with arrays and slices in Golang, nor in other languages.

For my test cases, I chose to test chunks smaller than 512 bytes, exactly 512 bytes, and larger than 512 bytes. If you wonder why, check out what [boundary tests](https://en.wikipedia.org/wiki/Boundary-value_analysis) are. I have a lot of experience in writing tests. Not surprisingly, the test with only 4 bytes failed. It took me some time to get deeper into the Go standard libraries. I (mis)use tests for this because it is so easy to write, execute, and debug small snippets of code. Here is my learning example:

```golang
func TestArray(t *testing.T) {
	//target2 := [6]int{}
	target := make([]int, 6)
	first := []int{1, 2, 3}
	second := []int{4, 5}
	size := 0
	copy(target[size:], first)
	size += len(first)
	copy(target[size:], second)
	size += len(second)
	target = target[:size]
}
```

AI helps me with explanations and gives me a better understanding of how to use slices in Go. It is always a pleasure for an old man to learn something from the youth full of new ideas.

![Helpful explanation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbzf4jg4pdvvok46sz1n.png)

With the help of GitHub Copilot, my first and second tests pass.
Here is the code I used:

```golang
var totalFileSize uint64 = 0
sniff := make([]byte, 512)
copy(sniff[totalFileSize:], chunkMessage.Chunk)
```

The test for more than 512 bytes failed because my slice was out of range. Maybe it is time for me to admit to myself that I still have a lot to learn. GitHub Copilot came up with the following solution (with the chunk length converted to `uint64` so the comparison compiles):

```golang
if totalFileSize < 512 {
	remaining := 512 - totalFileSize
	if uint64(len(chunkMessage.Chunk)) > remaining {
		chunkMessage.Chunk = chunkMessage.Chunk[:remaining]
	}
	copy(sniff[totalFileSize:], chunkMessage.Chunk)
}
```

In my arrogance as an old wise man, I thought I could do better. In my defense, _chunkMessage.Chunk_ cannot be modified because all bytes must be copied into the final file. I implemented a shorter version that worked well, at least in my eyes.

```golang
if totalFileSize < 512 {
	missingBytes := 512 - totalFileSize
	copy(sniff[totalFileSize:], chunkMessage.Chunk[:missingBytes])
}
```

I suggested this shorter version to the AI and asked for its opinion on my code. The AI was very pleased with my solution.

![Right, ...](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7s49cqz0ss6aj0jabss.PNG)

…but when I reran the tests, the scales fell from my eyes. GitHub Copilot is right, I do not copy more than 512 bytes. But in the test case where I have less than 512 bytes, this code does not work. The AI chose an answer to please me and avoided pointing out what I’d done wrong. I ended up with the code below. This is the best of both worlds.

```golang
if totalFileSize < 512 {
	missingBytes := 512 - totalFileSize
	remainingBytesInChunk := uint64(len(chunkMessage.Chunk))
	if remainingBytesInChunk < missingBytes {
		missingBytes = remainingBytesInChunk
	}
	copy(sniff[totalFileSize:], chunkMessage.Chunk[:missingBytes])
}
```

***

I strongly believe that a software engineer has to write tests. Tests are sometimes hard to write; it is stupid, boring work, and you have to spend time maintaining them.
But like a mother, they secure your life and take care of you. With them, I can sleep like a baby without worries. Now the AI does the same.

![Mummy loves baby](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30ud7cvdt1yaz4c8ug9l.jpg)

_Photo by <a href="https://unsplash.com/de/@isaacquesada?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Isaac Quesada</a> on <a href="https://unsplash.com/de/fotos/frau-im-weissen-t-shirt-mit-rundhalsausschnitt-tragt-baby-DMcNqigMn1c?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>_

Sleep well, AI. UnitTest loves and protects you.
kinneko-de
1,908,248
Retail Company Data Analytics (Predicting Future Sales)
In this project, I analyzed a dataset containing sales data for a retail company using linear algebra...
0
2024-07-01T23:01:00
https://dev.to/ludwig023/retail-company-data-analytics-predicting-future-sales-73k
In this project, I analyzed a dataset containing sales data for a retail company using linear algebra concepts. I used linear algebra techniques to identify trends and patterns in the sales figures, and applied regression analysis to predict future sales based on historical data.

**DATA ACQUISITION AND PREPARATION**

The retail store dataset was curated personally according to these attributes: Date, Sales Amount, Number of Products Sold, Marketing Expenditure, and Region. Sample datasets from Kaggle were used as a template for this particular dataset. The retail dataset is an Excel file that contains 8 columns and 500 rows. In this dataset, the data under customer ID are descriptive, specifically nominal; likewise the data under the date, location, and product_category_preferences attributes. The data under the number of products sold and frequency attributes are discrete, while sales amount and marketing expenditure are continuous. These two distinct data types are subsets of numerical data.

Below is an image showing the retail dataset in xlsx format.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwki50ln45jetyaugu6d.png)

Below is an image showing how the retail dataset was loaded into a dataframe.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ac344tami0jwb85pjso.png)

**Data Cleaning**

After importing the retail dataset in Google Colab, I imported the pandas library as pd. This enabled me to load the dataset into a dataframe: an instance was created with pd and stored in the variable ‘data’. I cleaned the retail dataset because cleaning is a necessary step that ensures the data is of good quality and consistent for analysis. First, I checked that the column names were correct using ‘data.columns’. Afterwards, I converted the sales column into a numeric column by removing the dollar sign.
The dollar sign attached to the figures makes them strings. Before performing this cleaning, I used the function ‘data.info()’ to derive some information from the retail dataset: the number of entries was 500, and I got to understand the data type of each column. The memory usage was 31.4+ KB. Afterwards, I converted the date into datetime format. I went a step further and checked for missing values using data.isnull(); the result indicated that there were no missing values. As a safeguard, I also replaced any missing values with the mean value, using the fillna method.

Below are images of the various data cleaning steps that were performed on the retail dataset.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8h9qll7k4bn77luxpobd.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tn52xlcnfyafoq958nlv.png)

**Exploratory Data Analysis**

Moving forward, I performed an exploratory data analysis on the retail dataset. I utilized the following methods for the first part of the EDA: data.describe() and data.info(). The data.describe() method was used to derive descriptive statistics from the retail dataset. The data.info() method was used to derive information about the dataframe, including the index dtype and columns, non-null values, and memory usage (pandas, 2023). I also performed some data type conversions: first, I converted the sales column into a numeric column by getting rid of the dollar sign; next, I converted the date column into datetime format. The data.describe() method output metrics indicating the count, mean, min, max, percentiles, and standard deviation of each column in the dataset. It indicates that all columns have 500 non-null entries, meaning there are no missing values in these columns.
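As an illustration, the cleaning steps described above can be sketched in pandas. The column names and values here are hypothetical stand-ins for the real 500-row dataset, which is loaded from the Excel file instead:

```python
import pandas as pd

# Hypothetical mini-version of the retail data; the real dataset would be
# loaded with pd.read_excel and has 500 rows.
data = pd.DataFrame({
    "Date": ["2023-01-01", "2023-01-02", "2023-01-03"],
    "Sales": ["$49.90", "$55.10", "$32.75"],
    "Marketing Expenditure": [12.5, None, 14.2],
})

# Remove the dollar sign so the sales figures become numeric instead of strings.
data["Sales"] = data["Sales"].str.replace("$", "", regex=False).astype(float)

# Convert the date column into datetime format.
data["Date"] = pd.to_datetime(data["Date"])

# Replace any missing values with the column mean via fillna, then confirm
# that no missing values remain.
data = data.fillna(data.mean(numeric_only=True))
print(data.isnull().sum().sum())  # 0 -> no missing values remain
```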
For example, the mean of the Number of Products Sold column is 7.68, and that of Sales is 49.91. These and more descriptive stats are displayed in the images below. The data.info() method displayed an output indicating that the memory usage of the dataframe is 31.4 KB, along with the dtype of each column.

**Data Visualization**

For the next part of the EDA, I visualized a histogram of each numeric column as well as a pairplot to see the relationships between the numeric variables. Below are images showing the code snippets of the exploratory data analysis.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl8x1n179s3imysnsbzb.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmsc1hx3ccb47m3swax2.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r6xnr9d0yyvhfzw1i61.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q7fyp2ebxmle3mlc8mfr.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pfxkwt0hcqcfh4p611kk.png)

**Linear Algebra & Model Training**

In this sales prediction project, I applied a couple of linear algebra concepts. Linear algebra is a branch of mathematics that aims at solving systems of linear equations with a finite number of unknowns (Schilling, Nachtergaele, and Lankham, n.d.). The concepts implemented are the covariance matrix and singular value decomposition. A covariance matrix represents the covariance values of each pair of variables in multivariable data (Builtin, 2023). The singular value decomposition of a matrix is the factorization of that matrix into the product of three matrices, for example A = UDV^T, where the columns of U and V are orthonormal and the matrix D is diagonal with positive real entries (Guruswami, n.d.).
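These two decompositions can be sketched with numpy. The feature matrix X below is random stand-in data, not the actual retail features:

```python
import numpy as np

# Hypothetical feature matrix: 500 observations of 3 numeric features
# (standing in for columns such as products sold and marketing expenditure).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))

# Covariance matrix: transposing makes each row a variable, as np.cov expects.
cov_matrix = np.cov(X.T)
print(cov_matrix.shape)  # (3, 3): one covariance value per pair of features

# Singular value decomposition: U holds the left singular vectors, Vt the
# right singular vectors, and S the non-negative singular values in
# decreasing order.
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# Sanity check: the three factors multiply back to X.
assert np.allclose(U @ np.diag(S) @ Vt, X)
```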
For this project, I converted the sales column into a numeric column because it had the dollar currency symbol attached, which made it a string. The reason is that it will be used as the target variable. I created a variable X (more or less a new dataframe) containing all the columns except the ones I dropped, and my target variable y, which contains the values in the sales column. Afterwards, I calculated the covariance matrix (cov_matrix = np.cov(X.T)) using the function in the numpy library; the T transposes the dataframe so that each row passed to np.cov represents a variable. The covariance matrix measures how much each pair of features in X changes together. The singular value decomposition (SVD) is performed on the features of X and returns three matrices: U, a unitary matrix holding the left singular vectors; S, the singular values as non-negative numbers in decreasing order; and Vt, a unitary matrix holding the right singular vectors. Below are the code snippets for the covariance matrix and the SVD ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v1nc9yqq9zu4i0a5vx14.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/52hl8tl10n2kitt0wbog.png) **Model Training** The machine learning model utilized to predict future sales is linear regression. Linear regression is a machine learning model in which an independent variable is used to determine a continuous dependent variable. Firstly, I split the retail dataset into training and testing sets. Then I extracted the index of the marketing expenditure column to identify it among the features. Next I created a list of feature columns and dropped the sales column. A simple linear regression was then performed with marketing expenditure as the only predictor.
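The linear algebra steps described earlier (covariance matrix and SVD on the feature matrix X) can be sketched with NumPy as follows; the shape of X and its random contents are hypothetical, standing in for the 500-row retail feature matrix.

```python
import numpy as np

# Hypothetical feature matrix X: 500 observations, 3 feature columns,
# standing in for the retail features after dropping the Sales target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))

# Covariance matrix: transpose so np.cov treats each column of X as a variable.
cov_matrix = np.cov(X.T)
print(cov_matrix.shape)  # (3, 3)

# Singular value decomposition: X = U @ diag(S) @ Vt, with S in decreasing order.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
print(S)
```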
Afterwards, a multiple linear regression was performed using all features. Multiple linear regression is a statistical technique that models the relationship between a dependent variable and two or more independent variables (Frost, 2023), more or less an upgrade of simple linear regression. Below are the code snippets of the linear regression and multiple linear regression models ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/swdoajoyruuqxlmqosz3.png) **Evaluation Metrics** The R-squared for the simple regression (-0.005806533949882953) and that of the multiple linear regression (0.008004018071779306) indicate that the models explain virtually none of the variability of the response data around its mean. Moreover, the negative value suggests that the simple model performs worse than a horizontal line (the mean of the dependent variable). The RMSE for the multiple linear regression is 29.246565814382784. This value (29.25) means that, on average, the model's predictions are off by 29.25. The coefficients of the multiple linear regression are as follows: feature 1 is -0.21182487, feature 2 is -0.00075948 and feature 3 is -0.27559952. Each of these values represents the change in the dependent variable for a unit change in the respective independent variable, and the negative coefficients indicate an inverse relationship between the features and the target variable. The intercept of the multiple linear regression is 55.55450162269854. This value (55.55) represents the expected value of the dependent variable when all the independent variables are zero. **Business Implication** The results of the evaluation metrics have some implications for the sales of the retail company. The low and negative R-squared values indicate that the models do not fit the data well and have poor predictive capability. This suggests that the features used are not good predictors of the target variable; in effect, feature engineering will be required.
The high RMSE confirms that the model's predictions are not accurate, which means the models need improvement. One improvement is to collect more relevant data; another is to implement feature selection to ensure that the best features are chosen for the models. As it stands, the model is not reliable for predicting future sales for the retail company. **Conclusion** In conclusion, the sales prediction analysis of the retail company was carried out using linear algebra concepts, namely the covariance matrix and singular value decomposition. These concepts provided valuable insights before model training. The covariance matrix helped me understand which variables to include in the linear regression and multiple linear regression models based on their relationships with the target variable; in simple terms, it was used to identify which variables have the strongest relationships with sales. The SVD helped to reduce dimensionality and multicollinearity and improved the stability and performance of the regression models by focusing on the most significant components. The linear regression models implemented in this analysis provided some valuable insights into the relationships between various factors and their impact on sales performance. After modeling, performance was checked for accuracy using the R-squared values and the root mean squared error (RMSE). The coefficients indicated the direction and magnitude of the relationships between the independent variables and the dependent variable, and the negative coefficients indicated an inverse relationship.
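The fitting and evaluation pipeline described above (train/test split, multiple linear regression, R-squared, RMSE, coefficients and intercept) can be sketched as follows. The data here is synthetic noise, so the printed numbers will differ from the ones reported in the article.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: 3 features and a noisy sales-like target.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 50 + rng.normal(scale=29, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit a multiple linear regression on all features.
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# R-squared and RMSE, the two metrics used to judge the models.
r2 = r2_score(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(r2, rmse, model.coef_, model.intercept_)
```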
**Project Reflection** This reflection on the sales prediction project covers some challenges I encountered, including anomalies in the dataset that made it unfit for the linear algebra concepts (covariance matrix and SVD), as well as the insights I derived and thoughts on future predictions. The dataset had some anomalies that would have made analysis hectic. The retail dataset was fairly good overall, but the sales column was recognized as a string because of the dollar sign attached to it. I had to detach the dollar sign from the figures to make the column numeric, since it was needed for the linear algebra concepts, namely the covariance matrix and SVD. The results of the linear and multiple regression models indicated that the model is not a good predictor of future sales. Thus, feature selection or hyperparameter tuning is essential to make sure the right columns are selected for training and testing the model.
ludwig023
1,907,468
10 Essential VS Code Tips & Tricks For Greater Productivity
Did you know that 73% of developers worldwide depend on the same code editor? Yes, the 2023 Stack...
0
2024-07-01T23:00:00
https://dev.to/safdarali/10-essential-vs-code-tips-tricks-for-greater-productivity-1cao
webdev, javascript, vscode, beginners
Did you know that 73% of developers worldwide depend on the same code editor? Yes, the 2023 Stack Overflow Developer Survey results are in, and yet again, Visual Studio Code was by far the most used development environment. ![“Visual Studio Code remains the preferred IDE across all developers, increasing its use among those learning to code compared to professional developers”](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wth7nx5b2m3l4q8c4pq7.png) > “Visual Studio Code remains the preferred IDE across all developers, increasing its use among those learning to code compared to professional developers” - Stack Overflow Developer Survey 2023 And we all know why: it’s awesome. But are we fully exploring its potential? In this article, we unfold some compelling VS Code features that enhance productivity with local source control, animated typing, and rapid line deletion, amongst others. Let's start using them to achieve our coding goals faster than ever. ## 1. Timeline View: Local Source Control The Timeline view gives us built-in source control. Many of us know how useful Git and other source control tools are, helping us easily track file changes and revert back to a previous point when needed. The Timeline view in VS Code provides an automatically updated timeline of important events related to a file, such as Git commits, file saves, and test runs. This feature ensures that you can always keep track of your code's history without leaving the editor. ## 2. Integrated Terminal One of the most powerful features of VS Code is the integrated terminal. You can run commands, scripts, and even full development servers directly within the editor. This eliminates the need to switch between the editor and a separate terminal application, streamlining your workflow. To open the terminal, you can use the shortcut: ``` Ctrl + ` ``` ## 3. 
Extensions Marketplace The VS Code Extensions Marketplace offers a vast array of tools that can significantly boost your productivity. From linters and formatters to themes and language support, there's an extension for almost everything. Some must-have extensions include: - **Prettier** for code formatting - **ESLint** for JavaScript linting - **Live Server** for a quick development server - **GitLens** for enhanced Git capabilities ## 4. Multi-Root Workspaces VS Code supports multi-root workspaces, allowing you to work on multiple projects simultaneously within a single window. This is particularly useful if you're working on related projects or microservices. To add folders to your workspace, go to: ``` File > Add Folder to Workspace... ``` ## 5. IntelliSense IntelliSense is a powerful code completion tool that provides smart completions based on variable types, function definitions, and imported modules. This feature can drastically speed up your coding by reducing the amount of typing and preventing errors. ## 6. Emmet Abbreviations Emmet is a shorthand syntax that helps you write HTML and CSS quickly. For example, typing div.container>ul>li*5 and pressing Tab will expand to: ``` <div class="container"> <ul> <li></li> <li></li> <li></li> <li></li> <li></li> </ul> </div> ``` This can save a lot of time when writing repetitive structures. ## 7. Command Palette The Command Palette provides quick access to many functions and commands in VS Code. You can open it by pressing: ``` Ctrl + Shift + P ``` From here, you can search for and execute commands, making it a powerful tool for boosting productivity. ## 8. Snippets Snippets are predefined templates that make it easier to enter repetitive code patterns. You can create custom snippets or use pre-made ones. For instance, typing for and pressing Tab can expand to a for-loop structure, saving you from typing it out each time. ## 9. 
Bracket Pair Colorization Bracket pair colorization helps you keep track of matching brackets in your code. This is particularly useful in languages with complex nested structures. You can enable this feature by installing the "Bracket Pair Colorizer" extension. ## 10. Debugging VS Code comes with built-in debugging support for several programming languages. You can set breakpoints, inspect variables, and step through your code directly within the editor. This makes finding and fixing bugs much faster and more intuitive. To start debugging, you can use the shortcut: ``` F5 ``` ## Conclusion Visual Studio Code is packed with features that can enhance your productivity and make coding more efficient. By fully leveraging these tools, you can streamline your workflow, reduce context switching, and focus more on writing great code. Whether you're a seasoned developer or just starting out, these tips and tricks can help you get the most out of VS Code. So, dive in, explore these features, and elevate your coding experience! That's all for today. And also, share your favourite web dev resources to help the beginners here! Connect with me:@ [LinkedIn ](https://www.linkedin.com/in/safdarali25/)and checkout my [Portfolio](https://safdarali.vercel.app/). Explore my [YouTube ](https://www.youtube.com/@safdarali_?sub_confirmation=1)Channel! If you find it useful. Please give my [GitHub ](https://github.com/Safdar-Ali-India) Projects a star ⭐️ Thanks for 24547! 🤗
safdarali
1,908,251
Mobile Dev..
Mobile dev has become an essential skill in today's society. As a mobile developer, my progress and...
0
2024-07-01T22:59:37
https://dev.to/joe_asam/mobile-dev-12jh
Mobile dev has become an essential skill in today's society. As a mobile developer, I will share my progress and experience in the world of mobile development. In this article, I will explain the merits and demerits of the major platforms and discuss my motivation for joining the HNG internship. My name is Joseph Asam Sunday, a student of Ritman University, a software engineer, a frontend developer and a tech-bro. My hobbies include gaming and coding. I got into the HNG internship to gain more knowledge. As high as the cost of learning mobile dev is, the HNG internship opens up the opportunity to immerse myself in the tech world, and I have high hopes that the internship will be profitable. MOBILE DEVELOPMENT PLATFORMS: Mobile dev platforms are the foundations upon which we build our applications. Here are the most used mobile dev platforms. ANDROID DEVELOPMENT WITH KOTLIN/JAVA: Android development primarily involves building apps for Android devices. The two main programming languages used for Android development are Kotlin and Java. •KOTLIN Kotlin is a modern, statically typed language that is fully interoperable with Java. It has gained popularity due to its concise syntax and enhanced features. •JAVA Java has been a cornerstone of Android development since the platform's inception. It is a versatile, object-oriented programming (OOP) language that is widely used not only in mobile dev but also in web, desktop and server-side applications. PROS AND CONS OF ANDROID DEVELOPMENT PROS 1. Large user base: Android has a vast user base globally, providing developers with a broad audience for their applications. 2. Open-source platform: Android's open-source nature allows developers to access and modify the source code, fostering innovation and customization. CONS 1. Fragmentation: Fragmentation is a significant challenge in Android development.
The vast number of devices, each with different hardware specifications and OS versions, can make it difficult to ensure consistent performance and compatibility. 2. App monetization: Monetizing Android apps can be difficult due to the high prevalence of free apps and a lower average revenue per user compared to iOS. There is also a higher rate of piracy and unauthorized app distribution. IOS DEVELOPMENT WITH SWIFT As the name implies, iOS development involves the creation of apps for Apple's iOS devices. Swift is the primary language used, although Objective-C is still in use for legacy projects. •SWIFT Swift is a powerful, intuitive language created by Apple. It is designed to work seamlessly with Apple's frameworks and provide a modern development experience. PROS AND CONS OF IOS DEVELOPMENT PROS 1. High-quality user experience: iOS devices are known for their consistent and high-quality user experience. Apple's stringent design guidelines ensure that apps look and perform well across all iOS devices, providing the best user experience. 2. Monetization opportunity: iOS users tend to spend more on apps and in-app purchases compared to Android users, making the platform a good place for new developers to generate income from their applications. CONS 1. Strict app review process: Apple's app review process is known for being strict and time-consuming. Apps can be rejected for various reasons, causing delays in deployment. 2. Development costs: Developing for iOS requires a Mac, which can be a significant upfront investment. Additionally, the annual fee for the Apple Developer Program is higher compared to Google's Play Console fee, and the cost of getting an iOS device is higher compared to an Android device. CONCLUSION With this, I've been able to highlight the different platforms with which apps can be built, along with their advantages and disadvantages. Along the way, I'm thrilled to learn new things on the internship. (To know more about the HNG internship [click here>](https://hng.tech/hire))
joe_asam
1,908,250
Learning to Program: The resources and methods I used to teach myself coding.
Hello everyone! Today, I want to share my journey in the world of programming. Changing careers can...
27,731
2024-07-01T22:58:56
https://dev.to/palak/learning-to-program-the-resources-and-methods-i-used-to-teach-myself-coding-5ajb
career, learning, beginners, ruby
Hello everyone! Today, I want to share my journey in the world of programming. Changing careers can often be a significant challenge, but with the right resources and learning methods, it is achievable. Here are some of the most effective and popular methods for learning programming, along with my experiences with them. ## Effective and Popular Methods for Learning Programming **- Online Courses:** Structured and often very comprehensive, allowing you to learn at your own pace. Platforms like Udemy, Coursera, and edX offer courses ranging from beginner to advanced levels, often taught by industry experts. **- Interactive Coding Platforms:** Websites offering interactive exercises and projects for practice, such as Codecademy, freeCodeCamp, and SoloLearn. These platforms allow you to learn by doing, which is incredibly effective. **- Books:** Traditional sources of knowledge that provide in-depth theoretical understanding. Although less interactive, they are excellent sources of detailed information and best practices. **- YouTube Tutorials:** Free resources that offer visual and practical tips on various programming topics. **- Coding Bootcamps:** Intensive, short-term training programs that cover a lot of material in a short time. These can be more expensive options but provide quick and comprehensive preparation for a career in programming. **- Participating in Open Source Projects:** Getting involved in open-source projects to gain real-world experience. This is a great way to get hands-on experience and collaborate with other programmers. **- Engaging with the Community:** Participating in programming communities through forums, social media, and meetups to get support and make connections. Platforms like Stack Overflow, Reddit, and local meetups can be extremely helpful. ## My Journey and the Resources I Used **Online Courses** **1. 
Arkademy - "From Zero to Apps"** Link: [Arkademy Courses by Arkency](https://courses.arkademy.dev/) Description: This was the first course I took. Unfortunately, it is no longer available. It was quite basic and better suited for those who already have some programming background and want to learn Ruby on Rails through practical examples. It helped me understand fundamental concepts and the mindset of a programmer. **2. Codecademy** Link: [Codecademy](https://www.codecademy.com/search?query=ruby) Description: Realizing I needed a solid foundation, I turned to Codecademy. Their interactive approach helped me understand the basics of Ruby and Rails through hands-on tasks. The step-by-step exercises and immediate feedback were very motivating. **3. Udemy** Link: [The Complete Ruby on Rails Developer Course](https://www.udemy.com/course/the-complete-ruby-on-rails-developer-course) Link: [Ruby on Rails 6: Learn 20+ Gems & Build an e-Learning Platform](https://www.udemy.com/course/ruby-on-rails-6-learn-20-gems-build-an-e-learning-platform) Description: Udemy offers a vast array of programming courses. These two courses, in particular, helped me develop my skills in Ruby on Rails. **4. SupeRails** by @superails Links: [SupeRails](https://superails.com/) platform, [blog](https://blog.corsego.com/), [youtube channel](https://www.youtube.com/@SupeRails) Description: Yaroslav's platform and YouTube channel are fantastic resources. He explains everything step-by-step and often debugs live, which helped me understand common problems and their solutions. Regularly updated content and the ability to see real debugging processes were very valuable. **Books** **1. The Ruby Way by Hal Fulton** Description: This book provided me with a deeper understanding of Ruby principles and best practices. It’s a comprehensive guide to the language, containing many examples and detailed explanations. **2. 
Design Patterns in Ruby by Russ Olsen** Description: I learned how to write more efficient and maintainable code by applying design patterns in Ruby. This book helped me understand how to structure code in a more advanced way. ## Polish Resources As a native Polish speaker, I also used several resources in my native language and the best one is: **GrubyKurs** by @rafalpiekara Link: [GrubyKurs](https://grubykurs.pl/) Description: This is a complete course designed for absolute beginners. It offers a full learning path, community support through Discord, and a newsletter that proved very helpful during recruitment interviews. Rafał publishes new materials and supports his students, creating a friendly learning environment. ## The Importance of Knowing English It’s worth noting that most high-quality programming resources are available in English. When I started learning programming, I struggled with English. Over time, improving my language skills was crucial for accessing better materials and understanding programming concepts. This experience showed me the importance of being open to learning new languages and skills, as they can significantly impact our careers. ## Conclusion Learning programming took me about a year, after which I landed an internship that opened the doors to my career as a programmer. My journey to becoming a programmer was full of challenges, but with perseverance and the right resources, I achieved my goal. If you are considering a similar path, I highly recommend exploring different learning methods and finding those that work best for you. Remember, every minute spent learning and practicing brings you closer to realizing your dreams. Good luck!
palak
1,908,247
Rugpull Identification and Prevention
🔐 The 4th edition of the #Web3SecurityPracticalGuide by TinTinLand in collaboration with...
0
2024-07-01T22:51:16
https://dev.to/ourtintinland/rugpull-identification-and-prevention-1kam
🔐 The 4th edition of the #Web3SecurityPracticalGuide by TinTinLand in collaboration with @sharkteamorg is here! 🚀 The Web3 Security Practices seminar is held twice a month, inviting you to delve into the underlying principles and latest trends of Web3 security. 📅 Date: July 3, Wednesday|20:00 UTC+8 📍 Tencent Meeting:[https://meeting.tencent.com/dm/ehpaeeWjDDEX](https://meeting.tencent.com/dm/ehpaeeWjDDEX) Follow:[https://x.com/OurTinTinLand](https://x.com/OurTinTinLand) 📌 Topic: #Rugpull Identification and Prevention 👥 Guest: Adam |Co-founder of @sharkteamorg 🕵️ Outline: 🔸Typical Rugpull event classification, feature analysis, and case sharing 🔸Analysis of the black production chain of Rugpull factories 🔸Rugpull identification and prevention suggestions 🛡️ Scan the QR code to join our Security Learning Community. Enhance your awareness and stay updated with the latest security trends!
ourtintinland
1,908,246
Demystifying Frontend: A Dive into ReactJS vs. Svelte
Hey everyone! I'm thrilled to be joining the HNG Internship program https://hng.tech/, and I'm...
0
2024-07-01T22:48:17
https://dev.to/faith_josephs_78ded4b72ba/demystifying-frontend-a-dive-into-reactjs-vs-svelte-2did
webdev, javascript, beginners
Hey everyone! I'm thrilled to be joining the HNG Internship program https://hng.tech/, and I'm especially excited to delve into the world of frontend development using ReactJS. As someone new to the field, I recently came across Svelte, another interesting framework, and it got me curious about the differences between the two. So, I decided to explore them both and share my findings! **The Showdown: ReactJS vs. Svelte** Let's break down these two popular frontend technologies: **ReactJS**: You might already be familiar with React, a powerful JavaScript library for building user interfaces. It uses components, which are reusable building blocks that encapsulate data and functionality. React also employs a virtual DOM, a lightweight representation of the real DOM, to efficiently update the UI when changes occur. React boasts a massive community, tons of learning resources, and a strong ecosystem of tools and libraries. However, it can have a steeper learning curve due to its structure and might feel a bit verbose for smaller projects. **Svelte**: This is a newer challenger in the frontend framework arena. Unlike React, Svelte takes a unique approach. It compiles code during build time, eliminating the need for a virtual DOM. This can lead to improved performance and smaller bundle sizes. Svelte also features a simpler syntax compared to React, making it easier to learn. However, Svelte's community and resources are still growing compared to React's established presence. Additionally, its rapid evolution might not be ideal for all projects. **Why ReactJS at HNG?** While both ReactJS and Svelte are great tools, there are strong reasons why HNG focuses on React: **Learning Advantage**: HNG's curriculum with ReactJS provides a solid foundation for frontend development. The vast React community ensures easy access to help and resources whenever you get stuck. **Career Opportunities:** Learning React opens doors to a wide range of job opportunities in the tech industry. 
Many companies heavily rely on React for their frontend development, so mastering it will put you ahead of the curve. **Conclusion** ReactJS and Svelte both offer excellent features for building user interfaces, but they cater to different preferences. ReactJS provides a comprehensive framework with a strong support system, while Svelte offers a more streamlined approach with potential performance benefits. I'm incredibly enthusiastic about learning ReactJS at HNG [https://hng.tech/internship] and can't wait to build amazing projects with this powerful library. While I'm focusing on React now, I'm definitely keeping an eye on Svelte's future developments! Who knows, maybe I'll explore both in the future and share my learnings with you all. Feel free to share your experiences with frontend development in the comments below!
faith_josephs_78ded4b72ba
1,908,217
Generics <T>【TypeScript】
The idea of Generics makes us confused, especially beginners. I am also one of such persons. Hence, I...
0
2024-07-01T22:44:29
https://dev.to/makoto0825/generics-typescript-18gd
webdev, typescript
The idea of Generics makes us confused, especially beginners. I am one of those people myself. Hence, I researched how to use generics in TypeScript. ## What are Generics? TypeScript generics are a feature that allows types to be treated as parameters, enhancing reusability and type safety. By using generics, you can write code that does not depend on specific types. But I guess some people still don't get it. So I will share some examples with you. ## How to use it ```typescript interface Interface1 { value1: string; value2: string; } interface Interface2 { value1: number; value2: number; } const test1: Interface1 = { value1: "John", value2: "ABC", }; const test2: Interface2 = { value1: 123, value2: 456, }; ``` We have two objects, test1 and test2. Their key names are the same, but the types are different, with one being string and the other being number. In this case, we need to create an interface for each respective type. However, creating a separate interface for each type is exactly what Generics saves us from. ```typescript interface Interface3<T> { value1: T; value2: T; } const test3: Interface3<number> = { value1: 123, value2: 456, }; const test4: Interface3<string> = { value1: "John", value2: "ABC", }; ``` When defining types, using &lt;T&gt; allows us to reuse a single interface by specifying the type as an argument when declaring variables. This makes it easier to create reusable interfaces, even when new types of objects are added. By the way, &lt;T&gt; can be replaced with &lt;A&gt;, &lt;B&gt; or any other preferred name; &lt;T&gt; is simply the most common convention. ## extends Generics allow you to impose constraints on types. When you want to specify that &lt;T&gt; must be one of certain specific types, you use extends for this purpose.
```typescript interface Interface4<T extends string | number> { value1: T; value2: T; } const test1: Interface4<string> = { value1: "John", value2: "ABC", }; const test2: Interface4<number> = { value1: 123, value2: 456, }; const test3: Interface4<boolean> = { value1: true, value2: false, }; //error happened ``` In this case, by using extends with string and number, we specify that T cannot accept any other types. Therefore, test3 causes an error because it was given the type boolean. ## Generics in functions Generics can also be used in functions. Even when you don't want to pin the types of arguments or return values down to a specific type when declaring a function, you can use &lt;T&gt;. You write &lt;T&gt; between the function name and the arguments to denote the use of generics. The declaration looks like the following: ```typescript function func1<T>(value: T): T { return value; } //same function const func2 = <T>(value: T): T => { return value; }; func1<string>("John"); func2<number>(100); func2<boolean>(true); ``` By passing a type as an argument when calling a function, you determine the type for that call. Of course, using extends allows you to impose type constraints as well. ```typescript function func1<T extends string | boolean>(value: T): T { return value; } func1<string>("John"); func1<number>(100); //error happened func1<boolean>(true); ```
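Functions are not limited to a single type parameter, either. As an illustrative addition (the `toPair` helper below is hypothetical, not from the examples above), a function can declare two type parameters and TypeScript will infer or check both:

```typescript
// A generic function with two type parameters, T and U.
function toPair<T, U>(first: T, second: U): [T, U] {
  return [first, second];
}

const p1 = toPair<string, number>("age", 30); // explicit type arguments
const p2 = toPair("name", true);              // T and U inferred as string/boolean
console.log(p1, p2);
```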
makoto0825
1,908,245
Technical Article: Automating Linux User Creation with Bash Script
Managing users and groups on a Linux system can be a complex and time-consuming task, especially in...
0
2024-07-01T22:41:45
https://dev.to/fikan/technical-article-automating-linux-user-creation-with-bash-script-677
Managing users and groups on a Linux system can be a complex and time-consuming task, especially in environments with frequent changes. Automation can significantly simplify this process, ensuring consistency and saving valuable time. In this article, we will walk through the implementation of a Bash script that automates the creation of users and groups, sets up home directories, generates secure random passwords, and logs all actions for auditing purposes.

### Script Overview

The Bash script `create_users.sh` reads a list of usernames and groups from a text file, creates the specified users and groups, sets up home directories with appropriate permissions, generates random passwords for the users, and logs all actions. The script also securely stores the generated passwords in a dedicated file.

### Script Breakdown

Here is the complete `create_users.sh` script, followed by a detailed explanation of each section:

```bash
#!/bin/bash

# Ensure the script is run with root privileges
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root"
  exit 1
fi

# Log file path
LOG_FILE="/var/log/user_management.log"

# Password storage file path
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Create secure directory for passwords if it doesn't exist
mkdir -p /var/secure
chmod 700 /var/secure

# Function to create groups
create_groups() {
  local groups="$1"
  IFS=',' read -r -a group_array <<< "$groups"
  for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs) # Remove leading/trailing whitespace
    if [ ! -z "$group" ]; then
      if ! getent group "$group" > /dev/null; then
        groupadd "$group"
        echo "Group '$group' created." | tee -a "$LOG_FILE"
      fi
    fi
  done
}

# Function to create user and group
create_user() {
  local username="$1"
  local groups="$2"

  # Create user group if it doesn't exist
  if ! getent group "$username" > /dev/null; then
    groupadd "$username"
    echo "Group '$username' created." | tee -a "$LOG_FILE"
  fi

  # Create the additional groups
  create_groups "$groups"

  # Create user with personal group and home directory if user doesn't exist
  if ! id "$username" > /dev/null 2>&1; then
    useradd -m -g "$username" -G "$groups" "$username"
    echo "User '$username' created with groups '$groups'." | tee -a "$LOG_FILE"

    # Set home directory permissions
    chmod 700 "/home/$username"
    chown "$username:$username" "/home/$username"

    # Generate random password
    password=$(openssl rand -base64 12)
    echo "$username:$password" | chpasswd
    echo "$username,$password" >> "$PASSWORD_FILE"
  else
    echo "User '$username' already exists." | tee -a "$LOG_FILE"
  fi
}

# Read the input file
input_file="$1"
if [ -z "$input_file" ]; then
  echo "Usage: $0 <name-of-text-file>"
  exit 1
fi

# Ensure the input file exists
if [ ! -f "$input_file" ]; then
  echo "File '$input_file' not found!"
  exit 1
fi

# Process each line of the input file
while IFS=';' read -r user groups; do
  user=$(echo "$user" | xargs)     # Remove leading/trailing whitespace
  groups=$(echo "$groups" | xargs) # Remove leading/trailing whitespace
  if [ ! -z "$user" ]; then
    create_user "$user" "$groups"
  fi
done < "$input_file"

# Set permissions for password file
chmod 600 "$PASSWORD_FILE"

echo "User creation process completed." | tee -a "$LOG_FILE"
```

### Detailed Explanation

#### Ensuring Root Privileges

The script starts by checking if it is being run with root privileges, as creating users and modifying system files require administrative rights.

```bash
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root"
  exit 1
fi
```

#### Setting Up Log and Password Files

The script defines paths for the log file and the password storage file. It then creates a secure directory for storing passwords and ensures it has the correct permissions.

```bash
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

mkdir -p /var/secure
chmod 700 /var/secure
```

#### Function to Create Groups

The `create_groups` function takes a comma-separated list of groups and creates each group if it does not already exist. It also logs the creation of each group.

```bash
create_groups() {
  local groups="$1"
  IFS=',' read -r -a group_array <<< "$groups"
  for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)
    if [ ! -z "$group" ]; then
      if ! getent group "$group" > /dev/null; then
        groupadd "$group"
        echo "Group '$group' created." | tee -a "$LOG_FILE"
      fi
    fi
  done
}
```

#### Function to Create Users and Groups

The `create_user` function handles the creation of the user and their primary group, as well as any additional groups. It sets up the user's home directory, assigns appropriate permissions, and generates a random password for the user.

```bash
create_user() {
  local username="$1"
  local groups="$2"

  if ! getent group "$username" > /dev/null; then
    groupadd "$username"
    echo "Group '$username' created." | tee -a "$LOG_FILE"
  fi

  create_groups "$groups"

  if ! id "$username" > /dev/null 2>&1; then
    useradd -m -g "$username" -G "$groups" "$username"
    echo "User '$username' created with groups '$groups'." | tee -a "$LOG_FILE"
    chmod 700 "/home/$username"
    chown "$username:$username" "/home/$username"
    password=$(openssl rand -base64 12)
    echo "$username:$password" | chpasswd
    echo "$username,$password" >> "$PASSWORD_FILE"
  else
    echo "User '$username' already exists." | tee -a "$LOG_FILE"
  fi
}
```

#### Processing the Input File

The script reads the input file provided as a command-line argument. Each line of the file is expected to contain a username and a list of groups separated by a semicolon. The script processes each line, removing any leading or trailing whitespace, and calls the `create_user` function.
```bash input_file="$1" if [ -z "$input_file" ]; then echo "Usage: $0 <name-of-text-file>" exit 1 fi if [ ! -f "$input_file" ]; then echo "File '$input_file' not found!" exit 1 fi while IFS=';' read -r user groups; do user=$(echo "$user" | xargs) groups=$(echo "$groups" | xargs) if [ ! -z "$user" ]; then create_user "$user" "$groups" fi done < "$input_file" ``` #### Finalizing Permissions Finally, the script ensures that the password file has the correct permissions, making it readable only by the root user. ```bash chmod 600 "$PASSWORD_FILE" echo "User creation process completed." | tee -a "$LOG_FILE" ``` ### Running the Script To run the `create_users.sh` script, follow these steps: 1. **Create the Input File**: Prepare a text file with usernames and groups. Each line should contain a username followed by a semicolon and a comma-separated list of groups. For example: ``` john;admins,developers jane;users,admins ``` 2. **Make the Script Executable**: Ensure the script has executable permissions. ```bash chmod +x create_users.sh ``` 3. **Run the Script with Root Privileges**: Execute the script, passing the path to the input file as an argument. ```bash sudo ./create_users.sh /path/to/input_file.txt ``` ### Viewing Logs and Passwords - **Log File**: The script logs all actions to `/var/log/user_management.log`. You can view this log file using a text editor or command like `cat`, `less`, or `tail`. ```bash less /var/log/user_management.log ``` - **Password File**: Generated passwords are stored in `/var/secure/user_passwords.csv`. This file is readable only by the root user. View it with: ```bash sudo less /var/secure/user_passwords.csv ``` ### Conclusion This script provides a robust solution for automating user and group management on a Linux system. It ensures that all actions are logged for auditing purposes and that generated passwords are stored securely. 
By following the steps outlined in this article, you can customize and extend the script to meet your specific needs, improving efficiency and consistency in user management tasks. [Click here to view the GitHub repository](https://github.com/fktona/Bash_script/tree/master). If you're curious about the [HNG Internship](https://hng.tech/internship), check out their website. And if you're looking to hire talented developers, head over to [HNG Hire.](https://hng.tech/hire)
fikan
1,908,243
Ng-News 24/25: DomRef, TypeScript 5.5, Business/Render effect, State of JavaScript
The days of ElementRef might be numbered. State of JavaScript revealed its results. TypeScript 5.5...
0
2024-07-01T22:37:46
https://dev.to/this-is-angular/ng-news-2425-domref-typescript-55-businessrender-effect-state-of-javascript-364g
webdev, javascript, angular, programming
The days of ElementRef might be numbered. State of JavaScript revealed its results. TypeScript 5.5 brings inferred type predicates, and a conceptual split into business and rendering effects is on the table. {% embed https://youtu.be/mpHzSkA-J2U %} ## RFC: DOM Interaction In a new RFC, the Angular team proposes replacing the ElementRef type with a new one: DomRef. ElementRef encapsulates the native DOM Element. UI libraries in particular, which require direct access to the DOM, depend heavily on it. According to Jeremy Elbourn, the switch is necessary to achieve advanced hydration techniques like streaming or partial hydration. The community received the RFC with certain reservations. https://github.com/angular/angular/discussions/56498 ## State of JavaScript 2023 State of JavaScript, a survey with around 24,000 participants, released its results. In terms of usage, Vue took over Angular's second place. In the category of interest, Angular rose from 20% to 23%. {% embed https://2023.stateofjs.com/en-US/ %} ## TypeScript 5.5 TypeScript 5.5 was released, and it will be available in Angular 18.1. We will get inferred type predicates. A function that checks a value against a certain type and returns a boolean based on that check is now automatically treated as a type predicate. That means whenever you use that function on a union type, TypeScript will narrow that union type accordingly. Other new features are better checks for regular expressions and better type support whenever you access an object literal dynamically. {% embed https://devblogs.microsoft.com/typescript/announcing-typescript-5-5 %} ## Business and Rendering Effects When using effects, we usually want to control the Signals that the effect function tracks. In application development, it would make sense to define those Signals explicitly. Alex Rickabaugh, Angular framework lead, explained on GitHub that the effect was designed for Rendering Effects, where implicit tracking is necessary.
Explicit tracking would be a good fit for Business Effects. Currently, the effect is not ready to support this in Angular Core. https://github.com/angular/angular/issues/56155#issuecomment-2177202036
ng_news
1,908,242
Automating The Creation Of Users From A Text File Using Bash Scripting
Introduction This article explains a bash script (GitHub Repo) designed to automate Linux...
0
2024-07-01T22:33:33
https://dev.to/jhude51/automating-the-creation-of-users-from-a-text-file-using-bash-scripting-5b4a
bash, linux, hng
## **Introduction** This article explains a bash script ([GitHub Repo](https://github.com/jhude51/hng-stage-one.git)) designed to automate Linux user account creation from a text file containing the users to create, along with a list of supplementary group(s) for each. The script creates users and groups as specified in the prerequisites section, sets up home directories with appropriate permissions and ownership, generates random passwords for the users, and logs all actions in a file. In addition, the script stores the generated passwords securely in a text file. ## **Prerequisites** Before proceeding with this article, it’s pertinent that you have some basic knowledge of the **Linux OS** and its commands. Although I have added clear comments to the script, a basic knowledge of Bash scripting is still required to follow along. To run or use the script, take note of the following: - Ensure you have sudo privileges, as user and group management typically requires root access, i.e. run the script with sudo or as root. - The script is written for an Ubuntu distro but would still work for other Linux flavors. - Each line in the text file to be passed to the script as an argument must be formatted as `user; list of groups separated by commas`. **Sample file structure:** ``` light; sudo,dev,www-data idimma; sudo mayowa; dev,www-data ``` ## **Script Explanation** With the housekeeping out of the way, below is the breakdown of the script. ### **Validation of Input File** The script starts by checking that an input file is passed as an argument to the script and that the path of the said file is valid (the file exists). For both checks, I used the if conditional in combination with the logical AND operator ‘**&&**’, which evaluates the second statement **only if the first statement is true**. So if the first statement (the failure condition) evaluates to true, the script exits immediately, i.e. **_exit 1_**.
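The validation above is shown only as a screenshot, so here is a minimal sketch of the same two checks. The function name and messages are my own (the repository's exact implementation may differ), and it returns a status instead of exiting so it can be reused:

```shell
#!/bin/bash
# Hypothetical sketch of the input-file validation described above.
# Each '&&' runs the error branch only when the failure condition is true.
validate_input() {
    [[ -z "$1" ]] && { echo "Usage: script <input-file>" >&2; return 1; }
    [[ ! -f "$1" ]] && { echo "File '$1' does not exist" >&2; return 1; }
    return 0
}
```

In the actual script the author calls `exit 1` directly at this point, which aborts the whole script rather than returning from a function.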
![Validation of Input File](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bd9jh4zdaz9rsjn7r8ei.png) ### **Helper Functions** Two helper functions are defined, for random password generation and logging. The random password generator function – _**password_gen()**_ – uses the bash built-in **$RANDOM** variable, which generates a random integer. The value of **$RANDOM** is piped through the **base64** command to produce an alphanumeric password. ![Helper Functions - Random Password Generator](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e9zczdq6dm2j1fkas7ne.png) The logging function – _**logger()**_ – when called with an argument, echoes the current date and time together with the action to a file declared as **$LOG_FILE**. ![Helper Functions - Logging Function](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbwomr96hpojmkna4t68.png) ### **Secure the Password File** The directory _**/var/secure**_ is created if it does not exist, and only the user has permissions on the directory. ![Secure the Password File](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p52jtfnssv6edanavocu.png) ### **Working with the Input File** The input file is first read line by line (**$lines**) and each line (**$line** – delimited by a newline character) is iterated over in a for loop. ![Working with the Input File - read line](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yhgpmr8vlu3rity34nkv.png) Each **$line** is then split into an array at the delimiter **‘;’** with a trailing whitespace **(‘; ’)**. Remember that each line of the input file is formatted as `user; list of groups`. The first slice (the string before the field separator/delimiter **‘; ’**) is assigned to the **$username** variable and the other slice to the **$groups** variable.
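The splitting step described above also appears only as a screenshot; one way it might be done with bash parameter expansion (the variable names follow the article, but the repository's exact implementation may differ) is:

```shell
#!/bin/bash
# Hypothetical sketch: split a 'user; group1,group2' line at the '; ' delimiter.
line='light; sudo,dev,www-data'
username="${line%%;*}"   # remove everything from the first ';' onwards
groups="${line#*; }"     # remove everything up to and including '; '
echo "$username"         # light
echo "$groups"           # sudo,dev,www-data
```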
![Working with the Input File - split username and group](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/atw97pxe8omyojj3qqko.png) ### **User Creation** The script then checks if a user with the **$username** already exists and, if true, skips to the next iteration. Otherwise, the helper _**password_gen()**_ function is called and the value assigned to a **$password** variable. The **useradd** utility is then called with the flags _m_, _U_ and _G_ (see the useradd man pages) to create the user. ![User Creation - useradd](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wq49214ymzvjgjfyi3ah.png) For the user password, we use the **chpasswd** utility to set the password to the generated **$password**. In addition, the password is redirected to the **$PASSWORD_FILE** and appropriate permissions are set on the file (only the user has permissions – _rw_). ![User Creation - chpasswd](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gsatzdwtz6z2xr36y0mu.png) ### **Securing the User Home Directory** Finally, appropriate permissions are set on the user’s home directory so that only the user has _read, write_ and _execute_ permissions on the directory. ![Securing the User Home Directory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/alta5pkyfzdza2rjqwmj.png) ## **Running the Script** - Make the script executable: `chmod +x /path/to/script`. You might have to run this command with **sudo**. - Run the script with the input file as an argument: `sudo ./path/to/script /path/to/inputfile.txt`. - You can verify the script ran successfully by running the following: ``` # View the $LOG_FILE sudo cat /var/log/user_management.log # View the $PASSWORD_FILE sudo cat /var/secure/user_passwords.txt # View the system accounts file sudo cat /etc/passwd ``` ## **Conclusion** As a Sysops/DevOps engineer, automating user account management with bash scripts can significantly enhance efficiency and accuracy.
Take note that the desired result can be achieved using a different logic and structure. For a more streamlined solution, refactoring the main part of the script into smaller functions should be considered. This task is a part of the **HNG Internship program** that offers a transformative 2-month internship, where the participants can amplify their skills, cultivate networks whilst working on real life projects like this one. You can learn more about the program by visiting the HNG Internship website at [HNG Internship](https://hng.tech/internship). You can also join the **HNG Premium Network** where you can get connected with top techies, collaborate with them, and grow your career. To learn more about the HNG Premium Network, visit [HNG Premium](https://hng.tech/premium).
jhude51
1,908,241
Ng-News 24/24: Vertical Architectures, WebAssembly, Angular v9's Secret, NgRx
Brandon Roberts unveiled why Angular 9 has the highest download rates. Manfred Steyer gave a talk...
0
2024-07-01T22:33:31
https://dev.to/ng_news/ng-news-2424-vertical-architectures-webassembly-angular-v9s-secret-ngrx-1bhi
webdev, javascript, angular, programming
Brandon Roberts unveiled why Angular 9 has the highest download rates. Manfred Steyer gave a talk about vertical architectures. Evgeniy Tuboltsev published a guide on how to integrate WebAssembly into Angular, and NgRx 18 was released. ## Vertical Architectures At the Angular Community Meetup, Manfred Steyer presented an upgraded version of his talk about DDD in Angular. He mentioned the team topologies model, where we have four different types of teams responsible for different tasks: 1. Platform services team 2. Specialization team 3. Supportive teams 4. Value stream team {% embed https://www.youtube.com/watch?v=AEMMyvFkx4c %} ## NgRx 18 NgRx, the most popular state management library in Angular, was released in v18, making it compatible with Angular 18. Be careful if you want to use the Signal Store: it is not yet stable and **will** only become stable in a later release. To use it in Angular 18, run `npm i @ngrx/signals@next` or use the schematic (also with the `next` tag). {% embed https://dev.to/ngrx/announcing-ngrx-18-ngrx-signals-is-almost-stable-eslint-v9-support-new-logo-and-redesign-workshops-and-more-17n2 %} ## The secret behind Angular 9 At the moment, Angular is downloaded around 3.5 million times per week, making it the third most downloaded framework after React and Vue. Around 450,000 downloads come from Angular 9. Given that the current version is 18, that's a little bit strange. Brandon Roberts discovered that Angular 9 is a dependency of Codelyzer. Codelyzer is a linting library we used before typescript-eslint came out. It is very likely that Codelyzer is still part of many applications even though it is no longer actively used, and developers should remove it. According to the statistics, Codelyzer currently has around 600,000 downloads. Without Codelyzer, Angular's download numbers would drop by about 17%.
{% embed https://www.youtube.com/watch?v=fQGDZzrIPRg %} ## WebAssembly & Angular WebAssembly allows applications written in languages other than JavaScript to run in the browser, at close to native speed. Evgeniy Tuboltsev wrote an article in which he shows how to port an application written in Rust to WebAssembly and consume it in Angular. In his comparison, the example runs three times faster than the JavaScript counterpart. {% embed https://medium.com/@eugeniyoz/powering-angular-with-rust-wasm-0eed1668a51c %}
ng_news
1,908,239
From Script to Snake - JavaScript to Python
Table Of Contents Introduction Popularity How to Setup On VSCode Control...
0
2024-07-01T22:30:09
https://dev.to/ismaelenriquez/from-script-to-snake-javascript-to-python-4h57
python, learning, javascript, tutorial
## Table Of Contents - Introduction - Popularity - How to Setup On VSCode - Control flow - Variables - Functions - Loops - Conclusion ## Introduction Are you a programmer who knows JavaScript but wants to learn Python? Here's an easy guide to dive into Python. Python is a powerful and versatile programming language used for web development, automating tasks, data analysis, and notably, artificial intelligence and machine learning. Known for its simplicity and readability, Python is accessible for beginners yet robust enough for complex projects. Its versatility has solidified its position as one of the most popular languages in 2024. Follow me as I transition from JavaScript to Python and explore machine learning. ## Popularity ![Python popularity](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ezcdg36gxe7ldpmybmiz.png) Python is known for its ease of use and flexibility, making it ideal for beginners and experienced developers alike. Its popularity is underscored by: - **Extensive Libraries:** Python boasts a wide array of libraries and frameworks that simplify tasks such as data analysis, web development, and machine learning. - **Industry Adoption:** Python is trusted by major tech companies like Google, Netflix, Uber, and Dropbox. - **Career Opportunities:** Python's widespread use opens many job opportunities, with salaries ranging from $66,000 to $164,000 in the US. ## How to Setup On VSCode 1. **Download Python:** - Visit [Python](https://www.python.org) and download the latest version of Python for your operating system. - During installation, ensure to check "Add Python to PATH". 2. **Open VSCode:** - After installing Python, open Visual Studio Code (VSCode). 3. **Install the Python Extension in VSCode:** - If you haven't installed the Python extension yet, VSCode will prompt you to install it when you create a Python file (*.py). - Alternatively, you can install the Python extension from the Extensions view in VSCode. 4. 
**Write and Run Python Code:** - Start coding Python directly in a .py file. - Run your Python code in VSCode. ![VSCode setup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mjoi0hjkt80pz5o0vnu2.png) ### Press play and it should run your code ## Running Python in the Terminal To run Python code in your terminal, use: ``` python3 <File Name> ``` ### Now, let's get into the syntax of Python and compare it with JavaScript to highlight their similarities and differences. ## Variables In Python, variable declaration is straightforward and does not require keywords like `let` or `const` as in JavaScript. Variables are simply assigned using the syntax `variable_name = value`. ![Python](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vwx10tuugfeyozwkib73.png) ![Javascript](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3jzqu6sh8lte2ffa1qrx.png) ## Control Flow Control flow determines the order in which statements run, based on conditions. In Python, we use `if`, `else`, and `elif` along with `==` (equals) and `!=` (not equals) for comparisons. ![Python Control Flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rwkhn4nmb5t54cnwbtv.png) ![Javascript Control Flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tz7i0cwa1aptnwsxdr39.png) In JavaScript, curly braces `{}` are used to define code blocks, including if statements, loops, functions, and other structures where multiple statements need to be grouped together. JavaScript also uses `console.log()` for outputting text to the console. ## Functions In Python, functions are defined using `def` followed by the function name and parameters in parentheses. The function's code is indented to mark its scope. ![Python Function](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzzaib2c3e07c2cfu8u7.png) ### In Python, `f"Hello, {name}!"` is equivalent to using backticks with `${name}` in JavaScript.
![Javascript Function](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vzqx8a89wwsfce9q3gab.png) ## Loops Python, like JavaScript, uses `for` loops and `while` loops for iterating through data or repeating tasks. Unlike JavaScript, Python's `for` loop directly iterates over items of lists or strings without needing an explicit incrementer. Key functions in Python include: - **len():** Used to determine the number of items in a sequence. - **range():** Generates a sequence of numbers, commonly used with `for` loops when iterating a specific number of times. ![Python Loop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wr7o6bl67naa1rgehz7k.png) ![Javascript Loop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5ckiksgkuvdayu2s798.png) ## Conclusion Moving from JavaScript to Python opens up a world of opportunities with a versatile and powerful programming language. Learning these essential topics is key to tackling exciting projects. For those looking to improve their Python skills, starting with resources like Codecademy's Python 3 lessons can build a solid foundation. Additionally, platforms like LeetCode offer practical challenges to enhance coding proficiency in Python. Thanks for reading!
ismaelenriquez
1,908,155
[Game of Purpose] Day 44
Today I fixed a problem with camera. The tutorial I took mouse control from was using camera...
27,434
2024-07-01T22:29:03
https://dev.to/humberd/game-of-purpose-day-44-4g1o
gamedev
Today I fixed a problem with the camera. The tutorial I took mouse control from was using camera rotation, whereas the default Third Person project uses `Yaw Input` and `Pitch Input`. Before: ![before](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21bjn75b7zquouwlqfkd.png) Now: ![now](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/te0xuzihiju89ttgp7bw.png)
humberd
1,908,224
Git Branch default Naming: main vs master
Since October 1, 2020, GitHub has set the default branch to "main" for new repositories, while many...
0
2024-07-01T22:07:53
https://dev.to/coderpena/git-branch-default-naming-main-vs-master-2lnc
github, git, 100daysofcode, branch
Since October 1, 2020, GitHub has set the default branch to "main" for new repositories, while many existing repositories still use "master". This change is part of an effort to use more inclusive language and avoid terms with negative historical connotations, opting for a more neutral term. "Main" is considered a better choice because it is short, easy to remember, and translates well across languages. On the other hand, Git, the underlying version control system, is an open-source project maintained by a diverse community. Changes in Git require consensus from contributors and users to ensure broad agreement and compatibility across various workflows and tools. As of now, the default branch name in Git remains "master" because changing it involves a more complex and slower process than it does on a single platform like GitHub. While discussions about changing the default branch name in Git have occurred, reaching a consensus in open-source projects takes time. As a result, many organizations and individuals have chosen to rename the default branch from "master" to "main" or some other neutral term. This change is not a requirement, but rather a choice made by individual users or organizations based on their values and preferences. Investigating all of this highlighted the importance of staying informed about industry updates and platform-specific changes. It also underscored broader discussions in the tech community regarding terminology and inclusivity in software development practices. Moving forward, adjusting to GitHub's default settings improved my project integration and enhanced my understanding of Git workflows. If you have encountered similar challenges with Git, here is what I did to resolve the issue: 1. I renamed the local branch with the following command: **git branch -m master main** 2. I pushed the renamed branch to the remote repository: **git push -u origin main** 3.
In addition, since Git 2.28 we can set a global configuration option for the default branch name, so that every new local repository we create is initialized with the branch name of our choice. I ran the following command: **git config --global init.defaultBranch main** Here you go. From now on, the branch names in the local and remote repositories will be the same, preventing pushes and pulls from targeting a different branch than expected.
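For completeness, the rename can be exercised end to end in a throwaway repository. This is my own illustration (nothing is pushed anywhere); it assumes Git ≥ 2.22 for `git branch --show-current` and uses `-M` so the rename works whether the initial branch was "master" or already "main":

```shell
#!/bin/bash
# Demonstrate the branch rename in a scratch repository.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
git init -q
git config user.email demo@example.com   # local identity, just for the demo commit
git config user.name demo
git commit -q --allow-empty -m "initial commit"
git branch -M main            # force-rename the current branch to 'main'
git branch --show-current     # prints: main
```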
coderpena
1,908,220
Creating users and groups from a file with bash.
As a devops engineer creating users and managing them is one of the primary responsibilities as a...
0
2024-07-01T22:07:27
https://dev.to/vale_hash/creating-users-and-groups-from-a-file-with-bash-1jf1
hng, sysadmin, bash, devops
As a DevOps engineer or sysadmin, creating and managing users is one of your primary responsibilities. Today we will be creating a bash script that: 1. Reads a text file containing the employees' usernames and group names, where each line is formatted as user;groups 2. Creates users and groups as specified 3. Sets up home directories with appropriate permissions and ownership 4. Generates random passwords for the users 5. Logs all actions to /var/log/user_management.log 6. Additionally, stores the generated passwords securely in /var/secure/user_passwords.txt Without further ado, let's get started. First, let's create the file create_user.sh using the `touch` command: `touch create_user.sh`. We first add our shebang, `#!/bin/bash`. Now we can start ticking off some of the requirements, requirement 5 specifically: let us create the user_management.log file. #### Creating the required log and txt files ``` if [[ ! -e /var/log/user_management.log ]]; then sudo mkdir -p /var/log/ sudo touch /var/log/user_management.log fi ``` Analyzing the above code: we check if the file /var/log/user_management.log exists; if it does not, we create the directory /var/log/ with mkdir and the -p flag (which creates any missing parent directories and does not complain if the directory already exists), then we create the file user_management.log using `touch`, and end the if statement with fi. Let's do the same for the file all the passwords will be saved in ``` if [[ ! -e /var/secure/user_passwords.txt ]]; then sudo mkdir -p /var/secure/ sudo touch /var/secure/user_passwords.txt fi ``` /var/secure/user_passwords.txt and /var/log/user_management.log are difficult to type and a mouthful to say, so we'll assign them to variables ``` log_file="/var/log/user_management.log" pass_file="/var/secure/user_passwords.txt" ``` and now we can start sending the output of everything that is going on to our log file ``` echo "logfile created..." >> $log_file echo "checking if text_file exists" >> $log_file ``` Next, we check whether a text file containing the users and groups has been passed. #### Confirming a file was passed as an argument ``` if [[ -z "$1" ]]; then echo "Usage: $0 filename" echo "Text file does not exist. ...exiting" >> $log_file exit 1 fi ``` The above snippet checks whether an argument was passed on the terminal; if not, it writes a message to the log file and the program ends with exit 1. Assuming a file was passed: #### Reading the file ``` echo "Reading file" >> $log_file file_path="$1" if [[ -f "$file_path" ]]; then echo "fetching the usernames and groups" >> $log_file while IFS= read -r lines; do user_name=$(echo "$lines" | awk -F'; ' '{print $1}') groups=$(echo "$lines" | awk -F'; ' '{print $2}') ``` The above code opens the file and starts to read from it. #### Fetching the usernames and groups ``` while IFS= read -r lines; do user_name=$(echo "$lines" | awk -F'; ' '{print $1}') groups=$(echo "$lines" | awk -F'; ' '{print $2}') ``` The above snippet says: while there are still lines in the file (i.e. we have not reached the end of the file), assign user_name to the result of piping $lines through awk with the delimiter '; ' to split the line. For example, given the line vale; sudo, vale, data, piping it into `awk -F'; ' '{print $1}'` splits the line into two parts, `vale` and `sudo, vale, data`, and {print $1} lets us access the first part, before the delimiter. The second line does the same thing, but {print $2} gives us access to the second part, after the delimiter.
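The awk behaviour described above is easy to verify on a sample line before wiring it into the script (the sample values here are just illustrative):

```shell
#!/bin/bash
# Verify how awk -F'; ' splits a 'user; groups' line into two fields.
line='vale; sudo, vale, data'
user_name=$(echo "$line" | awk -F'; ' '{print $1}')   # field 1: before '; '
groups=$(echo "$line" | awk -F'; ' '{print $2}')      # field 2: after '; '
echo "$user_name"   # vale
echo "$groups"      # sudo, vale, data
```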
```
file_path="$1"
if [[ -f "$file_path" ]]; then
  echo "fetching the usernames and groups" >> $log_file
  while IFS= read -r lines; do
    user_name=$(echo "$lines" | awk -F'; ' '{print $1}')
    groups=$(echo "$lines" | awk -F'; ' '{print $2}')
    if id -u "$user_name" >/dev/null 2>&1; then
      echo "user $user_name already exists"
      echo "user $user_name already exists" >> $log_file
    else
      IFS=',' read -ra group_array <<< "$groups"
```

Now that we have access to the users, we can check if a user already exists. To do that we simply check if the user has an id; if the name already has an id, the user exists, and we simply report that the user has already been created.

```
if id -u "$user_name" >/dev/null 2>&1; then
  echo "user $user_name already exists"
  echo "user $user_name already exists" >> $log_file
```

### Checking if a user already exists

`id -u "$user_name"` is used to get the id of a user, and `>/dev/null 2>&1` redirects the output to /dev/null so no output is displayed.

`else IFS=',' read -ra group_array <<< "$groups"` reads the elements in `$groups` into an array `group_array`, where the elements are split on the `,` delimiter.

```
file_path="$1"
if [[ -f "$file_path" ]]; then
  echo "fetching the usernames and groups" >> $log_file
  while IFS= read -r lines; do
    user_name=$(echo "$lines" | awk -F'; ' '{print $1}')
    groups=$(echo "$lines" | awk -F'; ' '{print $2}')
    if id -u "$user_name" >/dev/null 2>&1; then
      echo "user $user_name already exists"
      echo "user $user_name already exists" >> $log_file
    else
      IFS=',' read -ra group_array <<< "$groups"
      for group in "${group_array[@]}"; do
        if ! getent group "$group" >/dev/null 2>&1; then
          sudo groupadd "$group"
          echo "Group $group created"
          echo "Group $group created" >> $log_file
        fi
      done
```

### Group creation

```
for group in "${group_array[@]}"; do
  if ! getent group "$group" >/dev/null 2>&1; then
    sudo groupadd "$group"
    echo "Group $group created"
    echo "Group $group created" >> $log_file
  fi
done
```

Let's break down the above block of code:

`for group in "${group_array[@]}"; do` creates a variable `group` that is equal to the current item in the `group_array` created before, advancing through each element of the array.

`if ! getent group "$group" >/dev/null 2>&1;` — remember when we wanted to find out if a user exists or not? This is basically the same thing, but instead of a user we use `getent` to look the group up in the group database; if no entry is found, the group does not exist, so we discard the output and create the group.

### Password creation

We can generate a random password using openssl:

`password=$(openssl rand -base64 12)`

This creates a variable `password` holding 12 random bytes encoded in base64.

#### Creating users with their assigned group and passwords

Luckily someone asked this question on Stack Overflow and got a response:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lrtsrf7veedr5z5rriyx.png)

Thank you very much Netzego and Damien.

```
sudo useradd -m -G "$groups" -p "$(openssl passwd -1 "$password")" "$user_name"
echo "adding groups :$groups for user: $user_name and password in $pass_file " >> $log_file
group_fold="/home/$user_name"
```

So far we have:

```
file_path="$1"
if [[ -f "$file_path" ]]; then
  echo "fetching the usernames and groups" >> $log_file
  while IFS= read -r lines; do
    user_name=$(echo "$lines" | awk -F'; ' '{print $1}')
    groups=$(echo "$lines" | awk -F'; ' '{print $2}')
    if id -u "$user_name" >/dev/null 2>&1; then
      echo "user $user_name already exists"
      echo "user $user_name already exists" >> $log_file
    else
      IFS=',' read -ra group_array <<< "$groups"
      for group in "${group_array[@]}"; do
        if ! getent group "$group" >/dev/null 2>&1; then
          sudo groupadd "$group"
          echo "Group $group created"
          echo "Group $group created" >> $log_file
        fi
      done
      password=$(openssl rand -base64 12)
      sudo useradd -m -G "$groups" -p "$(openssl passwd -1 "$password")" "$user_name"
      echo "adding groups :$groups for user: $user_name and password in $pass_file " >> $log_file
      group_fold="/home/$user_name"
```

Now we just have to create the users' home folders with the appropriate permissions and ownership (note: we append with `>>` so the log is not truncated, and we use `$pass_file`, the variable defined at the top):

```
sudo chmod 700 "$group_fold" >> $log_file 2>&1
sudo chown "$user_name:$user_name" "$group_fold" >> $log_file 2>&1
echo "Home directory for $user_name set up with appropriate permissions and ownership" >> $log_file
echo "$user_name,$password" >> "$pass_file"
    fi
  done < "$file_path"
  sudo chmod 600 "$pass_file"
  sudo chown "$(id -u):$(id -g)" "$pass_file"
  echo "File permissions for $pass_file set to owner-only read" >> $log_file
else
  echo "File not found: $file_path" >> "$log_file"
  exit 1
fi
```

### The complete code:

```
#!/bin/bash

if [[ ! -e /var/log/user_management.log ]]; then
  sudo mkdir -p /var/log/
  sudo touch /var/log/user_management.log
fi
if [[ ! -e /var/secure/user_passwords.txt ]]; then
  sudo mkdir -p /var/secure/
  sudo touch /var/secure/user_passwords.txt
fi

log_file="/var/log/user_management.log"
pass_file="/var/secure/user_passwords.txt"

echo "logfile created..." >> $log_file
echo "checking if text_file exists" >> $log_file

if [[ -z "$1" ]]; then
  echo "Usage: $0 filename"
  echo "Text file does not exist. ...exiting" >> $log_file
  exit 1
fi

echo "Reading file" >> $log_file
file_path="$1"
if [[ -f "$file_path" ]]; then
  echo "fetching the usernames and groups" >> $log_file
  while IFS= read -r lines; do
    user_name=$(echo "$lines" | awk -F'; ' '{print $1}')
    groups=$(echo "$lines" | awk -F'; ' '{print $2}')
    if id -u "$user_name" >/dev/null 2>&1; then
      echo "user $user_name already exists"
      echo "user $user_name already exists" >> $log_file
    else
      IFS=',' read -ra group_array <<< "$groups"
      for group in "${group_array[@]}"; do
        if ! getent group "$group" >/dev/null 2>&1; then
          sudo groupadd "$group"
          echo "Group $group created"
          echo "Group $group created" >> $log_file
        fi
      done
      password=$(openssl rand -base64 12)
      sudo useradd -m -G "$groups" -p "$(openssl passwd -1 "$password")" "$user_name"
      echo "adding groups :$groups for user: $user_name and password in $pass_file " >> $log_file
      group_fold="/home/$user_name"
      sudo chmod 700 "$group_fold" >> $log_file 2>&1
      sudo chown "$user_name:$user_name" "$group_fold" >> $log_file 2>&1
      echo "Home directory for $user_name set up with appropriate permissions and ownership" >> $log_file
      echo "$user_name,$password" >> "$pass_file"
    fi
  done < "$file_path"
  sudo chmod 600 "$pass_file"
  sudo chown "$(id -u):$(id -g)" "$pass_file"
  echo "File permissions for $pass_file set to owner-only read" >> $log_file
else
  echo "File not found: $file_path" >> "$log_file"
  exit 1
fi
```

This task was assigned to me during my [HNG](https://hng.tech/internship) internship devops track.
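One detail worth noting about the script above: `openssl rand -base64 12` draws 12 random bytes, and base64-encoding them yields a 16-character password string. The same check sketched in Python (stdlib only):

```python
import base64
import secrets

# 12 random bytes, like `openssl rand -base64 12`
password = base64.b64encode(secrets.token_bytes(12)).decode()
print(len(password))  # 16 characters: 12 bytes -> (12/3)*4 = 16, no padding
```
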
vale_hash
1,908,223
When to use enums vs. inheritance for modeling object types
Understanding when to use enums versus inheritance and polymorphism to model object types is crucial...
0
2024-07-01T22:07:27
https://dev.to/muhammad_salem/when-to-use-enums-vs-inheritance-for-modeling-object-types-1mc1
Understanding when to use enums versus inheritance and polymorphism to model object types is crucial for creating maintainable and scalable software. Here's a systematic thought process to help you make informed decisions: ### When to Use Enums Enums are useful when you have a finite set of related constants that are conceptually distinct but don't require behavior differences. Here are some situations where enums are appropriate: 1. **Fixed Set of Values**: When you have a known, fixed set of values that are not expected to change frequently. For example, days of the week, order status, or directions (north, south, east, west). 2. **Simple State Representation**: When you want to represent simple states or categories that don’t involve different behaviors or additional properties. For example, the status of an order (Pending, Shipped, Delivered, Cancelled). 3. **Lightweight**: When you need a lightweight solution that doesn't require the overhead of class hierarchies. Enums are easy to implement and use, providing a clear and concise way to define and use constants. 4. **Data-Driven Conditions**: When the values are primarily used for data-driven conditions, such as switch statements or conditional logic in methods. **Example: Order Status** ```csharp public enum OrderStatus { Pending, Shipped, Delivered, Cancelled } ``` ### When to Use Inheritance and Polymorphism Inheritance and polymorphism are appropriate when you need to model types that have different behaviors or additional properties. Here are some situations where inheritance and polymorphism are suitable: 1. **Behavior Differences**: When different types have distinct behaviors that need to be encapsulated in different methods. For example, different types of notifications (EmailNotification, SMSNotification) with different sending mechanisms. 2. **Extendable Types**: When you anticipate that the set of types might change or expand in the future. 
Creating a base class and extending it makes the system more flexible and easier to maintain. 3. **Shared Functionality**: When you have common functionality that can be shared across multiple types. Using a base class to implement shared behavior and properties reduces code duplication. 4. **Rich Object Model**: When you need to represent a richer object model with more complex interactions and relationships. Using inheritance allows you to create more expressive and flexible models. **Example: Notification System** ```csharp public abstract class Notification { public string Recipient { get; set; } public string Message { get; set; } public abstract void Send(); } public class EmailNotification : Notification { public string Subject { get; set; } public override void Send() { // Implementation for sending email } } public class SMSNotification : Notification { public override void Send() { // Implementation for sending SMS } } ``` ### Systematic Thought Process 1. **Analyze Requirements**: - What are the different types you need to model? - Do these types share common properties or behaviors? - Will the set of types change or expand in the future? 2. **Determine Complexity**: - Are the differences between types primarily behavioral or do they involve additional data? - Is the behavior simple enough to be handled by conditional logic, or does it warrant separate classes? 3. **Consider Extensibility**: - Is it likely that new types will be added? - Does the system need to be open for extension but closed for modification (Open/Closed Principle)? 4. **Evaluate Maintainability**: - Will the use of enums lead to long switch statements or conditionals scattered across the codebase? - Will inheritance hierarchies become too complex or hard to manage? 5. **Performance Considerations**: - Enums are generally more performant for simple state representation. - Inheritance might introduce some overhead but provides greater flexibility and clarity for complex behaviors. 
### Decision Guidelines - **Use Enums When**: - You have a simple, finite set of related constants. - The values are used primarily for data-driven decisions. - The types don’t have significant behavior differences. - **Use Inheritance and Polymorphism When**: - Different types have distinct behaviors or properties. - The model needs to be extendable and flexible. - You need to share common functionality among different types. ### Conclusion Choosing between enums and inheritance/polymorphism depends on the specific requirements and complexity of the system you are designing. Enums are great for simple, fixed sets of constants, while inheritance and polymorphism provide flexibility and extensibility for more complex models with distinct behaviors. By systematically analyzing the requirements, complexity, and maintainability of your system, you can make informed decisions that lead to a robust and scalable design.
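To make the trade-off concrete, here is a minimal sketch (in Python rather than the article's C#; all names are illustrative): the enum version routes behavior through a data-driven condition, while the polymorphic version moves behavior into the types themselves.

```python
from enum import Enum

class OrderStatus(Enum):
    PENDING = "Pending"
    SHIPPED = "Shipped"

def describe(status: OrderStatus) -> str:
    # Data-driven condition: fine for a small, fixed set of simple states
    return f"Order is {status.value}"

class Notification:
    def send(self) -> str:
        raise NotImplementedError

class EmailNotification(Notification):
    def send(self) -> str:
        # Distinct behavior lives in the subtype, not in a switch statement
        return "sending email"

class SMSNotification(Notification):
    def send(self) -> str:
        return "sending sms"

print(describe(OrderStatus.PENDING))
print([n.send() for n in (EmailNotification(), SMSNotification())])
```

Adding a new status means touching every conditional that inspects the enum; adding a new notification type means adding one subclass and nothing else, which is the Open/Closed argument for polymorphism.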
muhammad_salem
1,908,222
SAW Software Bug Report
Introduction: SAW is an acronym for Scrap Any Website. This software is used for scrapping websites...
0
2024-07-01T22:07:21
https://dev.to/samson_ajayi/saw-software-bug-report-c00
testing, software, producttesting, softwaredevelopment
**Introduction**: [SAW](https://scrapeanyweb.site/) is an acronym for [Scrap Any Website](https://scrapeanyweb.site/). This software is used for scraping websites to obtain information that might be needed by developers. Provided below are some details about this software:

**Name**: Scrap Any Website [Download here](https://apps.microsoft.com/detail/9mzxn37vw0s2?hl=en-us&gl=NG)
**OS**: Windows 10 version 17763.0 or higher
**Memory**: Not specified (Minimum), 4 GB (Recommended)
**Features**: Data Extraction, Website Scraping
**Approximate size**: 136.5 MB

What I tested and reported was specifically the scraping task. I focused on the scraping-tasks feature and documented the bug report from my personal observation.

**Problem Description:**

Issue 1: I observed an unknown code "-1" when scraping some URLs, which could cause confusion and misunderstanding for some users. This issue can be reproduced by inputting at least 20 website URLs, turning on the _discover New URL_ button, then scraping the URLs simultaneously. After this is done, some URLs return the wrong code, "-1". See more on [this spreadsheet](https://docs.google.com/spreadsheets/d/15IvCI94T37Iu8do9P9yYSmsA1AIFsdxTztDp8OtytVE/edit?usp=sharing)

Issue 2: I observed an inconsistency in the scraped-data statistics, which could cause uncertainty in the usage of the software. This inconsistency can be reproduced when multiple URLs are tested: you will see miscalculated scraped-data counts. See the image and further details in [this spreadsheet](https://docs.google.com/spreadsheets/d/15IvCI94T37Iu8do9P9yYSmsA1AIFsdxTztDp8OtytVE/edit?usp=sharing).

Issue 3: I observed that users are unable to delete URLs. So in a case where a user mistakenly inputs a URL, it is very difficult to remove; no feature or option is provided for this when scraping in a folder.
**Impact:** For Issue 1, I couldn't pin down an actual error and believe there must have been a software issue, hence my inability to trust the data I see in this software. The scraped-data statistics miscalculation might also be a consequence of this flaw, which seems to be causing a cascade of errors. The inability to delete a particular URL and re-scrape the URL list also makes the software less user friendly in some cases.

**Bug Report Process:** As cited earlier, click the link below to access the spreadsheet for more information on this bug report process: [BUG REPORT](https://docs.google.com/spreadsheets/d/15IvCI94T37Iu8do9P9yYSmsA1AIFsdxTztDp8OtytVE/edit?usp=sharing)

The stated observations should be revisited, and a guide should be provided for users. This will help encourage newbies to use the software and understand seemingly confusing terminology.

**Conclusion**: These identified issues significantly impact the software's usability and reliability. The presence of unexpected codes like "-1" during URL scraping creates confusion and undermines trust in the data output. Inconsistencies in scraped-data statistics further complicate data analysis, potentially leading to flawed insights. Additionally, the inability to delete URLs post-scraping hampers user flexibility and makes error correction cumbersome. Addressing these issues through thorough testing, documentation improvements, and user-friendly features will be essential in enhancing the software's functionality and user experience.

Samson Ajayi, HNG 11 Intern, 2024.
samson_ajayi
1,897,464
🗣️🦾📲🤓 RTFM : ask AI agent to learn how to send sms w. open-interpreter
💭 About How many times did someone ask you : "How do you..." ... and how many times...
27,823
2024-07-01T22:07:14
https://dev.to/adriens/rtfm-ask-ai-agent-to-learn-how-to-send-sms-w-open-interpreter-c52
nocode, ai, showdev, tutorial
## 💭 About

How many times did someone ask you:

> _"How do you..."_

... and how many times did you make the [RTFM (Read the Fucking Manual)](https://en.wikipedia.org/wiki/RTFM) joke?

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0k334ro41v9z0imgpzj.png)

👉 Well, that's what this blog post is all about. We're going to:

1. **🎁 Provide a `cli` tool that behaves like all other tools** to an AI assistant
2. **🦥 Ask it to learn by itself** how to use the tool
3. **🚀 Make it do the job** ... locally!

To achieve this we will use:

- **🤖 A core `LLM` engine**: `gpt-4`
- **💻 A locally running assistant** that is able to create an action plan to achieve a goal: [Open Interpreter](https://www.openinterpreter.com/)

## 🎯 What we'll do

This time, we'll rely on a custom tool I created last week-end:

{% embed https://dev.to/adriens/mobitag-go-hackathon-2024-06-22-week-end-2n16 %}

and ask the **AI assistant to discover the tool and send a custom sms** with custom content to myself with it.

## 🧑‍🏫 Teach it how to use the tool

{% youtube 9BTJ2sCbTbg %}

## 🤣 Funniest `sms`

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cq1pdh5oyligk9ywghkl.png)

## 🔭 Perspectives

The same way we used the `--help` pattern for `cli` tools, we can use an [OpenAPI](https://swagger.io/specification/) spec to tell an AI how to use an API as a tool. But keep in mind: the better the documentation, the easier and better the integration will be... at almost zero effort (which is our target to scale automation).
Below are some examples of how to achieve this with various frameworks & services:

## 🤓 For coders

- [🦙 LlamaIndex](https://www.llamaindex.ai/): [OpenAPI Tool](https://llamahub.ai/l/tools/llama-index-tools-openapi?from=all)
- [🦜 Langchain](https://www.langchain.com/): [OpenAPI tool](https://python.langchain.com/v0.2/docs/integrations/toolkits/openapi/)

## ⏱️ `< 5'` demo : build & deploy conversational agents (non-coders)

Last but not least, for non-coders there is [Google Vertex AI Agent Builder](https://cloud.google.com/dialogflow/vertex/docs/concept/tools) to **build and deploy agents ... within 5 minutes**:

[![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/chc4hgfc9ujkzzsvcmjk.png)](https://youtu.be/0QbUYfTRJEY?si=lVbF-8ssL_8zq6ZF&t=547)
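As a tiny illustration of the `--help` pattern discussed above, the agent-side step is just "capture the tool's self-documentation before planning a call" — sketched here with `grep` standing in for any CLI (the post's mobitag tool is the real target):

```python
import subprocess

# Capture a CLI tool's built-in manual; an agent can read this text
# before deciding how to invoke the tool.
result = subprocess.run(["grep", "--help"], capture_output=True, text=True)
manual = result.stdout or result.stderr
print(manual.splitlines()[0])  # the usage line
```
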
adriens
1,908,221
How to recover your cryptocurrency lost to investment scam
I was impatient to carry out the necessary research but I wanted to jump on the crypto trading and...
0
2024-07-01T22:00:44
https://dev.to/judith_allen_bc4852a5e3cf/how-to-recover-your-cryptocurrency-lost-to-investment-scam-756
I was impatient to carry out the necessary research but I wanted to jump on the crypto trading and investment buzz. Unfortunately for me, I invested 84,700 GBP worth of bitcoin with a fraudulent trading platform, I was happy to watch my account grow to 123,575 GBP within a couple of weeks. But I didn't realize I was dealing with a scam trading platform until I tried to make a withdrawal. I made a withdrawal request and noticed my account was suddenly blocked for no apparent reason. I tried contacting customer support but to no avail. I needed my money back at all costs because I couldn't afford to let it go. So I tried all possible means to make sure I recovered my scammed Bitcoin. I did a lot of online searching for help and tried to see if other people had any similar experiences. I stumbled upon a cryptocurrency forum where a couple of people mentioned that they had been through the same process but were able to recover their lost cryptography funds with the help of INFINITE DIGITAL RECOVERY. So I filed a report to their email on infinitedigitalrecovery@proton.me and they were able to help me get back all my lost funds within a couple of hours without any upfront payment, I feel indebted to them. Apart from trying to express my gratitude to them once again using this medium, I will recommend anybody who wants to recover scammed bitcoin, stolen cryptocurrency, funds lost to binary options forex, investment, frozen wallet, and any other form of online scam to reach out to INFINITE DIGITAL RECOVERY now. EMAIL: infinitedigitalrecovery@proton.me TELEGRAM: +15625539611
judith_allen_bc4852a5e3cf
1,908,219
Comparing Sass and Vue: A Deep Dive into Two Frontend Technologies
In the ever-evolving landscape of frontend development, two technologies have stood out for their...
0
2024-07-01T21:52:35
https://dev.to/variant/comparing-sass-and-vue-a-deep-dive-into-two-frontend-technologies-578f
webdev, frontend, css
In the ever-evolving landscape of frontend development, two technologies have stood out for their unique contributions to the developer's toolkit: Sass (Syntactically Awesome Style Sheets) and Vue.js. Both have revolutionized how we approach web design and development, but they serve very different purposes. This article will explore the nuances of Sass and Vue.js, contrasting their functionalities, strengths, and what makes each of them invaluable in the realm of frontend development.

**What is Sass?**

Sass is a CSS preprocessor, which means it extends the capabilities of standard CSS. It introduces features that aren't available in plain CSS, such as variables, nested rules, and mixins. Sass makes writing CSS more efficient and easier to maintain by allowing developers to use reusable code snippets and logical structures.

Key Features of Sass:

- Variables: Store values like colors, fonts, or any CSS value that you want to reuse throughout your stylesheet.
- Nesting: Nest your CSS selectors in a way that follows the same visual hierarchy of your HTML.
- Partials and Import: Split your CSS into smaller, more manageable files, which can be imported into a main stylesheet.
- Mixins: Create reusable chunks of code that can be included in other selectors.
- Inheritance: Share a set of CSS properties from one selector to another.

**What is Vue.js?**

Vue.js is a progressive JavaScript framework used for building user interfaces and single-page applications. Vue is designed to be incrementally adoptable, meaning you can use as much or as little of Vue as you need. It provides data-reactive components with a simple and flexible API.

**Key Features of Vue.js:**

- Reactive Data Binding: Automatically updates the DOM when the underlying data changes.
- Components: Encapsulate reusable code in self-contained units.
- Directives: Special tokens in the markup that tell the library to do something to a DOM element.
- Vue CLI: A powerful tool for scaffolding Vue.js projects.
- Single File Components: Combine HTML, JavaScript, and CSS in a single file with the .vue extension.

**Comparing Sass and Vue.js**

While Sass and Vue.js both enhance frontend development, they do so in fundamentally different ways. Here's a closer look at their differences:

**Purpose and Use Case**

Sass: Primarily used for styling websites. It extends CSS capabilities, making it easier to write and manage stylesheets.

Vue.js: A JavaScript framework for building interactive user interfaces and single-page applications. It focuses on the structure and functionality of web applications.

**Learning Curve**

Sass: Relatively easy to learn for those who are already familiar with CSS. The syntax is straightforward, and it builds on existing CSS knowledge.

Vue.js: Has a steeper learning curve, especially for those new to JavaScript frameworks. However, Vue's documentation is excellent, and its learning path is smooth.

**Integration**

Sass: Can be integrated with any project that uses CSS. It doesn't require any specific setup beyond a build tool like Webpack or Gulp to compile the Sass files into CSS.

Vue.js: Requires a more involved setup, especially for larger projects. It often involves using the Vue CLI and setting up a build process.

**Performance**

Sass: As a preprocessor, it compiles to CSS, which means there is no runtime performance cost. The styles are just as fast as regular CSS.

Vue.js: Adds a small amount of overhead due to its reactivity system and component structure. However, it is optimized for performance and scales well with large applications.

**Working with ReactJS in HNG**

In the HNG Internship, we predominantly use ReactJS, another powerful JavaScript library for building user interfaces. React's component-based architecture and unidirectional data flow make it a popular choice for developers. As I delve deeper into ReactJS during the HNG Internship, I look forward to enhancing my skills in creating dynamic and efficient web applications. React's ecosystem and community support are unparalleled, providing a wealth of resources and libraries to streamline development.

**Conclusion**

Sass and Vue.js each offer distinct advantages that cater to different aspects of frontend development. Sass enhances the styling workflow, making CSS more manageable and efficient, while Vue.js empowers developers to build interactive and dynamic web applications with ease. Understanding and leveraging both technologies can significantly elevate your frontend development skills. For more information about the HNG Internship and to explore opportunities, visit https://hng.tech/internship and https://hng.tech/hire.
variant
1,908,218
Abstract & Interface in C#
Note: We will use a famous example, an Animal, for the whole article What is...
27,809
2024-07-01T21:48:01
https://www.linkedin.com/pulse/abstract-interface-c-loc-nguyen-j7apc/
csharp, beginners, learning
**Note**: We will use a famous example, an Animal, for the whole article

### What is Abstract?

An abstract class provides a base definition for something (e.g. the base class for Dog and Bird is Animal). It supports polymorphism in OOP. An abstract class is declared using the `abstract` keyword. It can contain both abstract methods (without implementations) and non-abstract methods (with implementations). Take a look at the example below.

We define the abstract class `Animal` with the following details:

- Properties: name
- Abstract methods: MoveBy
- Non-abstract methods: Print

![abstract](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k2wuooamxyc1192isway.png)

Next, we will create two classes, `Dog` and `Bird`, which implement `Animal`. After that, we initialize them.

![dog abstract](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vs4a7ni2rgx0tyki4nhs.png)

![bird abstract](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8gsmxvw2glja03rvivpp.png)

![main abstract](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b0rtpmu05m9c7oyptg51.png)

Now we can make some points here:

- Every `Dog` and `Bird` has a name (1)
- But `Dog` moves by leg and `Bird` moves by wing (maybe also by leg, but let's assume wing) (2)

With (1), we can declare the `Print` method in `Animal` as non-abstract and use it for both `Dog` and `Bird`.

With (2), the `MoveBy` method must be abstract, and it is declared in each class later. This is the key point showing the difference between abstract and non-abstract methods.

In summary:

- An `abstract` class provides a base definition (e.g. Animal)
- An `abstract` method provides a base method that executes differently in each implementing class (e.g. MoveBy)
- `Non-abstract` methods are usually used as common methods for all implementing classes (e.g. Print)

### What is Interface?

For short, an `interface` is a completely “abstract class” that defines a contract for related functionalities.
When we sign a contract, we must provide/do everything defined in the contract; for example, `A` must give `B` $100k, and `B` doesn't need to know whether `A` pays in cash or by transfer — that is `Abstraction` in OOP. Take a look at the example below.

We define the interface `IAction` with the following details:

- Move
- Talk

![interface](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ohft99k2elf6qlehfewk.png)

Next, we will extend the two classes `Dog` and `Bird`, which already implement `Animal`, with `IAction`.

![Dog interface](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qu8a1y2siuvhjxazwxg4.png)

![Bird interface](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z1dp17jna44l5vfulp24.png)

![Main interface](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bkuaa0m175faijktkp3.png)

We can see that in `IAction` we have two abstract methods that execute differently in each class.

In summary:

- An `interface` defines a contract, or set of rules, that classes must adhere to.
- An `interface` contains only abstract methods, not fields.
locnguyenpv
1,908,157
SQL Course: Self Join
It’s time to add followers to our database application. We will establish one more many-to-many...
27,924
2024-07-01T21:45:41
https://dev.to/emanuelgustafzon/sql-course-self-join-5g0g
sql
It’s time to add followers to our database application. We will establish one more many-to-many relationship. Our join table will be the Follows table. The interesting thing about this table is that its two foreign keys reference the same table, the Users table. Therefore we need to query the followers with a self join. [Read more about self joins.](https://www.w3schools.com/sql/sql_join_self.asp)

## Create the join table

* The OwnerID is the user who follows another user.
* The FollowingID is the user who is being followed.

```
CREATE TABLE Follows (
ID INTEGER PRIMARY KEY AUTOINCREMENT,
OwnerID INTEGER,
FollowingID INTEGER,
FOREIGN KEY (OwnerID) REFERENCES Users(ID),
FOREIGN KEY (FollowingID) REFERENCES Users(ID)
);
```

## Insert data

E.g. the user with ID 1, Ben, follows the user with ID 2, Jim.

```
INSERT INTO Follows (OwnerID, FollowingID) VALUES
(1, 2),
(2, 1),
(2, 3);
```

## Query with self join

In this query we retrieve all users together with the users they follow, then group by user.

```
SELECT owner.Username, followsUser.Username
FROM Follows f
JOIN Users owner ON f.OwnerID = owner.ID
JOIN Users followsUser ON f.FollowingID = followsUser.ID
GROUP BY owner.Username, followsUser.Username;
```

We can also get all the users a particular user is following:

```
SELECT owner.Username, followsUser.Username
FROM Follows f
JOIN Users owner ON f.OwnerID = owner.ID
JOIN Users followsUser ON f.FollowingID = followsUser.ID
WHERE owner.Username = 'Ben';
```
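The schema and queries above can be exercised end-to-end with SQLite via Python's built-in `sqlite3` module (a minimal Users table is assumed here, since the course created it in an earlier lesson):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal stand-in for the Users table from the earlier lesson
cur.execute("CREATE TABLE Users (ID INTEGER PRIMARY KEY AUTOINCREMENT, Username TEXT)")
cur.execute("""CREATE TABLE Follows (
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    OwnerID INTEGER,
    FollowingID INTEGER,
    FOREIGN KEY (OwnerID) REFERENCES Users(ID),
    FOREIGN KEY (FollowingID) REFERENCES Users(ID))""")
cur.executemany("INSERT INTO Users (Username) VALUES (?)",
                [("Ben",), ("Jim",), ("Amy",)])
cur.executemany("INSERT INTO Follows (OwnerID, FollowingID) VALUES (?, ?)",
                [(1, 2), (2, 1), (2, 3)])

# The self join: the Users table joined twice under two aliases
rows = cur.execute("""
    SELECT owner.Username, followsUser.Username
    FROM Follows f
    JOIN Users owner ON f.OwnerID = owner.ID
    JOIN Users followsUser ON f.FollowingID = followsUser.ID
    WHERE owner.Username = 'Ben'""").fetchall()
print(rows)  # [('Ben', 'Jim')]
```
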
emanuelgustafzon
1,908,216
Don't be a jack of all trades: The incredible importance of domain knowledge for software testers.
_As a software tester, diving deep into the domain you’re working in isn’t just a nice-to-have—it’s a...
0
2024-07-01T21:44:32
https://dev.to/lanr3waju/dont-be-a-jack-of-all-trades-the-incredible-importance-of-domain-knowledge-for-software-testers-2pgc
webdev, productivity, softwaretesting, career
**_As a software tester, diving deep into the domain you’re working in isn’t just a nice-to-have—it’s a game-changer! Here’s why:_** 1. Know What to Test: Understanding the domain means you know exactly what aspects of the product to test. You can pinpoint critical functionalities and ensure they work seamlessly. 2. Where to Test: Domain knowledge helps you identify the key areas of the application that need thorough testing. You’ll know where the potential pain points are and can focus your efforts there. 3. How the Product Works: A solid grasp of the domain means you understand the product inside out. This knowledge allows you to simulate real-world usage scenarios and catch bugs that might otherwise be missed. 4. Feature Expectations: Knowing the domain means you’re aware of the essential features the product should have. You can ensure these features are present and functioning as expected. 5. User Interaction: You can evaluate how the product should interact with users, ensuring a smooth and intuitive user experience. Your insights can help design a product that users love. 6. Critical Decisions: Domain expertise helps you make informed decisions about what the product should and shouldn’t include. You can guide the development process to ensure the final product meets user needs. 7. Developer Support: Even if developers aren’t familiar with the domain, your knowledge can bridge the gap. You can highlight potential issues and areas for improvement that developers might overlook. **But wait, there’s more! 🌟** - Faster Issue Identification: With domain knowledge, you can quickly identify and address issues. You’ll know what to look for and can catch defects early in the development cycle. - Improved Communication: You can communicate more effectively with stakeholders. Whether you’re discussing requirements with clients or collaborating with developers, your domain expertise makes you a valuable interlocutor.
- Increased Credibility: Having domain knowledge boosts your credibility. Teams will trust your insights and rely on your expertise to ensure the product’s success. - Continuous Improvement: Domain knowledge isn’t static. As you continue learning and growing within the domain, you’ll bring fresh perspectives and innovative solutions to the table. In a nutshell, being a knowledgeable software tester means being a domain expert. Your insights drive quality, enhance user experience, and ensure the product not only meets but exceeds expectations. So, fellow testers, dive deep into your domains and let’s keep making amazing products! 💡✨
lanr3waju
1,908,164
Ruby / Rails Setup: Operating System
This series is a guide for anyone who wants to prepare a Ruby / Rails development environment. The...
27,960
2024-07-01T21:38:52
https://dev.to/serradura/setup-ubuntu-para-desenvolver-com-ruby-rails-5g2l
beginners, ruby, rails, braziliandevs
This series is a guide for anyone who wants to set up a [Ruby](https://www.ruby-lang.org/en/) / [Rails](https://rubyonrails.org/) development environment. The goal is to present a step-by-step guide with the bare minimum you need to start developing with Ruby as quickly as possible. Choose your operating system and follow the instructions. - [Windows 11](https://dev.to/serradura/setup-para-ruby-rails-windows-wsl-479l) - [Linux Ubuntu](https://dev.to/serradura/setup-para-ruby-rails-ubuntu-2ip8) - MacOS (coming soon) --- Have you heard of **ada.rb - Arquitetura e Design de Aplicações em Ruby**? It is a group focused on software engineering practices with Ruby. Join the <a href="https://t.me/ruby_arch_design_br" target="_blank">Telegram channel</a> and meet us at our 100% online <a href="https://meetup.com/pt-BR/arquitetura-e-design-de-aplicacoes-ruby/" target="_blank">meetups</a>. ---
serradura
1,908,211
Kubernetes Summary
x
0
2024-07-01T21:29:31
https://dev.to/adzubla/resumo-kubernetes-1m69
kubernetes, k8s, kubectl, aks
--- title: Kubernetes Summary published: true description: x tags: kubernetes, k8s, kubectl, aks --- ## Configuring the cluster Configure kubectl to access an Azure AKS cluster ```sh az aks get-credentials -g ${GROUP} -n ${CLUSTER} ``` View the kubectl configuration ```sh kubectl config view ``` Adding a new cluster to kubectl ```sh # Add a user/principal that will be used when connecting to the cluster kubectl config set-credentials kubeuser/foo.com --username=kubeuser --password=kubepassword # Point to a cluster kubectl config set-cluster foo.com --insecure-skip-tls-verify=true --server=https://foo.com # This context points to the cluster with a specific user kubectl config set-context default/foo.com/kubeuser --user=kubeuser/foo.com --namespace=default --cluster=foo.com # Use this specific context kubectl config use-context default/foo.com/kubeuser ``` Switching clusters ```sh # Show the clusters configured in ~/.kube kubectl config get-contexts # Show the current context kubectl config current-context # Switch context kubectl config use-context CONTEXT_NAME ``` ## Namespaces Creating a namespace ```sh kubectl create namespace ${NAMESPACE} ``` Switching the current namespace ```sh kubectl config set-context --current --namespace=${NAMESPACE} ``` ## Pods Searching (by label) ```sh kubectl get pods -l app=MY-APP POD_NAME=$(kubectl get pods -o=jsonpath='{.items[?(@.metadata.labels.app=="MY-APP")].metadata.name}') ``` Describe (view events) ```sh kubectl describe pod $POD_NAME ``` Logs (from the application) ```sh kubectl logs $POD_NAME ``` Show the nodes the pods run on ```sh kubectl get pods -o=wide # Filter by node name kubectl get pods --field-selector spec.nodeName=$NODE_NAME ``` ## Terminal Opening a terminal in a pod ```sh kubectl exec --stdin --tty $POD_NAME -- /bin/bash ``` > If you are using a Git-bash or MinGW terminal, set the `MSYS_NO_PATHCONV` variable: > ```sh > MSYS_NO_PATHCONV=1 kubectl exec --stdin --tty $POD_NAME -- /bin/bash > ``` ## Service Accessing a service ```sh kubectl get svc ${SERVICE_NAME} EXTERNAL_IP=$(kubectl get svc ${SERVICE_NAME} -o=jsonpath='{.status.loadBalancer.ingress[0].ip}') curl http://${EXTERNAL_IP}/api/v1/hello ``` Exposing a port locally: ```sh kubectl port-forward service/${SERVICE_NAME} 9200:9200 ``` ## Logs ```sh kubectl logs $POD --all-containers --previous=false ``` ## Restarting pods Without downtime: ```sh kubectl rollout restart deployment <deployment_name> ``` Stopping all pods and then restarting them all: ```sh kubectl scale deployment <deployment_name> --replicas=0 kubectl scale deployment <deployment_name> --replicas=1 ``` Changing a variable associated with the pod: ```sh kubectl set env deployment <deployment_name> LAST_START_DATE="$(date)" ``` Deleting a specific pod: ```sh kubectl delete pod <pod_name> ``` Deleting all pods with a label: ```sh kubectl delete pod -l "app=myapp" ``` ## Performance Show CPU and memory consumption of pods ```sh kubectl top pod kubectl top pod POD_ID ```
adzubla
1,908,146
My Journey
Finding Solutions: A Backend Developer's Journey Nothing gives a backend developer greater...
0
2024-07-01T20:30:40
https://dev.to/ed_adelaja_e1a8093ba92af5/my-journey-1loi
Finding Solutions: A Backend Developer's Journey Nothing gives a backend developer greater satisfaction than solving an issue they first struggled with. I recently encountered a challenge of this kind, which not only pushed the boundaries of my technical knowledge but also cemented my love for backend development. Here’s a step-by-step breakdown of that experience, along with a bit about the journey I’m about to embark on with the HNG Internship. (https://hng.tech/premium) (https://hng.tech/internship) The Challenge I was approached with a seemingly simple request: to optimize database queries for an e-commerce app. Now, the problem here was that the application was experiencing serious slowdowns during peak traffic times. I did some troubleshooting and found out the problem was with the database. Step-by-Step Breakdown 1. Identifying the Root Cause I started by figuring out which queries were making the app run slowly. I used MySQL's slow query log and the Performance Schema to identify which of the queries took the longest to run. Action Taken: I enabled the slow query log and then set a threshold to log the queries that took longer than one second to execute. Within a few hours, I had a list of the problematic queries. 2. Analyzing the Queries I did some more troubleshooting on the slow queries. I wanted to understand why they were taking so long to execute. For this, I had to find out how MySQL was running these queries, and so I used the "EXPLAIN" statement. Action Taken: Running "EXPLAIN SELECT..." on the slow queries showed me that many of them were not making effective use of indexes. They were doing full table scans instead, which I found concerning. 3. Optimizing Indexes After finding the problematic queries, the next step I took was to optimize the indexes. This involved adding new indexes and optimizing existing ones to ensure that the queries could run more efficiently, without the prior slowdowns.
Action Taken: I added composite indexes on the columns that were frequently used together in WHERE clauses and JOIN operations. I also removed redundant indexes that I believed were no longer needed. 4. Query Refactoring In some cases, adding indexes just wasn’t enough. Some queries needed to be rewritten altogether. This involved breaking down complex queries into simpler, more efficient ones. Action Taken: I restructured several queries to reduce the number of JOIN operations and to use subqueries where necessary. I also implemented caching for some read-heavy operations to reduce the load on the database. 5. Testing and Monitoring After making these changes, I needed to be sure that they actually improved performance without introducing new issues. I set up a staging environment that mirrored the production environment to test the latest changes. Action Taken: I used load testing tools to simulate peak traffic and monitored the performance of the database. The optimized queries showed a huge reduction in execution time, and the application’s overall responsiveness improved. The Outcome The result of these optimizations was a dramatic improvement in the application’s performance during peak traffic times. I gained valuable insights into database optimization techniques following this. My Journey with HNG Internship Looking back on this experience, I’m more excited than ever about the journey I’m about to start with the HNG Internship. This internship for me is a great opportunity to learn from experts, collaborate with other talented developers, and tackle real-world problems. I’m very interested in the mentorship aspect of the HNG Internship. Having access to seasoned developers who can provide guidance and feedback is invaluable. I’m also eager to contribute to meaningful projects and to continue honing my skills in a dynamic and challenging environment. Why HNG Internship? I chose HNG Internship because it aligns with my career goals.
It focuses on practical experience and, to be honest, that’s just what I’m looking for. I would like to push myself, learn new things, and build some projects along the way. In conclusion, solving challenging backend problems is what drives me as a developer. The satisfaction of overcoming obstacles and finding practical solutions to them is unmatched. As I embark on this journey with the HNG Internship, I’m ready to embrace new challenges, learn from the best, and continue growing as a backend developer. Thank you for reading about my journey. I’ll be giving more updates in due course as I dive into this chapter.
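As a footnote, the kind of changes described in steps 1-3 can be sketched in MySQL syntax; every table, column, and index name below is invented for illustration, not taken from the actual project:

```sql
-- 1. Log anything slower than one second
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

-- 2. Inspect the execution plan of a suspect query
EXPLAIN SELECT o.id, o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'paid' AND o.created_at > '2024-01-01';

-- 3. Add a composite index on columns used together in WHERE/JOIN clauses
CREATE INDEX idx_orders_status_created ON orders (status, created_at);
```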
ed_adelaja_e1a8093ba92af5
1,908,213
🚀 How to make your CV beat ATS and impress recruiters 🌟
Generic applications get buried these days. Stand out with a well-tailored CV that proves you're the...
0
2024-07-01T21:29:00
https://dev.to/hey_rishabh/how-to-make-your-cv-beat-ats-and-impress-recruiters-432d
webdev, beginners, tutorial, ai
Generic applications get buried these days. Stand out with a well-tailored CV that proves you're the perfect fit! Here’s how to craft a powerful CV that gets noticed ## Break the job description📍 📝 Read the job description closely and **_identify the (keywords) skills, experience, and qualifications they crave_**. These are your golden keywords. 🌟 By thoroughly understanding the job description, **you can tailor your CV to reflect exactly what the employer is looking for**. This means **highlighting the specific skills and experiences that match the job requirements**, showing that you are not only qualified but also a perfect fit for the role. **For example,** > if a job description mentions "project management" 10 times, ensure this keyword is prominently featured in your CV. ## 📍 Keyword magic ✨ **Sprinkle those keywords throughout your CV naturally**. Don't force it – but show the ATS and recruiter you possess what they are looking for. 🧙‍♂️ **[Integrate these keywords into your professional summary](https://instaresume.io/blog/how-to-write-a-resume-summary)**, skills section, and job experiences. ATS systems scan for specific terms, so using the right keywords can help ensure your CV gets seen by human eyes. **Aim to use each key term at least 3-4 times, depending on the length of your CV.** ## 📍 [Skills showcase](https://instaresume.io/blog/how-many-skills-to-list-on-resume) 🛠️ **Highlight your skills in a way that mirrors the job requirements. Balance technical expertise (hard skills) with interpersonal strengths (soft skills) – just like the job description emphasizes.** 🤹‍♂️ _For example, if the job requires project management and teamwork, make sure these skills are prominently featured. This balance shows you have a well-rounded skill set that matches the job perfectly. 
Quantify your skills by including numbers, such as "managed a team of 15" or "reduced project completion time by 25%."_ ## 📍 [Make these skills clear and easy to find](https://instaresume.io/blog/how-many-bullet-points-per-job-on-resume) 🔍 **Use bullet points, headings, and concise language to make your skills and qualifications easy to scan**. 📑 Recruiters often skim through CVs, so it’s crucial to make your key points stand out at a glance. A well-organized CV with clear sections will help recruiters quickly see your qualifications. **_For instance, use 5-7 bullet points under each job experience to keep it clear and concise._** ## Quick checklist for CV greatness ✅ ## 📍 Scrutinize the job description 🕵️‍♀️ Don't skim – understand their needs. Take note of specific requirements, preferred skills, and key responsibilities. This will help you align your CV with what the employer is seeking. Understanding their needs in detail allows you to tailor your application effectively. **Spend at least 15-20 minutes analyzing each job description.** ## 📍 Uncover the keywords 🔑 These are your tools to get seen. Identify the essential words and phrases from the job description and incorporate them naturally into your CV. Use these keywords in your professional summary, experience, and skills sections to show you have exactly what they're looking for. **Aim for 10-15 keywords per job description.** ## 📍 Refresh your summary 🖊️ Tailor it to the specific role. Your professional summary should highlight your most relevant experiences and skills, directly reflecting what the job description calls for. A well-crafted summary can quickly convey your suitability for the role and grab the recruiter’s attention. **Keep it to 3-4 sentences, incorporating 2-3 key achievements or skills.** ## 📍 Fine-tune your experience 💼 Highlight relevant achievements. Focus on accomplishments and responsibilities that directly relate to the job you’re applying for.
Use metrics and specific examples to showcase your impact. For instance, instead of saying “Managed a team,” you could say “Managed a team of 10, increasing productivity by 20%.” Include 2-3 quantified achievements for each role. [Incorporating action verbs into your bullet points](https://instaresume.io/blog/action-verbs-for-resume) can significantly enhance your ATS compatibility, ensuring your resume stands out. ## 📍 List powerful skills 💪 Showcase what makes you a perfect match. Create a skills section that lists both hard and soft skills pertinent to the job, demonstrating a comprehensive fit for the role. Make sure these skills are easy to find and clearly defined. Include 6-8 skills that are most relevant to the job. ## 📍 [Keep it concise & focused](https://instaresume.io/blog/resume-format-for-freshers) 🕐 Recruiters don't have all day. Aim for a clear, concise, and well-organized CV that quickly conveys your suitability for the role. Avoid unnecessary details and focus on what makes you stand out. **Keep your CV to 1-2 pages, depending on your experience level.** [Tailoring your CV takes time](https://instaresume.io/blog/resume-format-for-freshers), but the results are golden! 🌟 Invest in a winning CV, and watch your job search take off. 🚀
hey_rishabh
1,908,210
shadcn-ui/ui codebase analysis: How does shadcn-ui CLI work? — Part 2.3
I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the...
0
2024-07-01T21:22:59
https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-does-shadcn-ui-cli-work-part-23-fnj
javascript, opensource, shadcnui, nextjs
I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the shadcn-ui/ui CLI. In [part 2.2](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-does-shadcn-ui-cli-work-part-2-2-73cff5651b06), I followed the call stack when the function getProjectConfig is called and discussed the functions [getConfig](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L55) and [getRawConfig](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L91). A detailed explanation was provided about getRawConfig, which is called from getConfig. In this article, we will analyse a few more lines of code from the getConfig function. ![](https://media.licdn.com/dms/image/D4E12AQFI9vQMv__mbg/article-inline_image-shrink_1000_1488/0/1719868556454?e=1725494400&v=beta&t=8DBsSv5N2c6wJjt5dY_8Zk1Rzj11lBFiyGrqVCqpryw) resolveConfigPaths ------------------ ```js export async function resolveConfigPaths(cwd: string, config: RawConfig) { // Read tsconfig.json. const tsConfig = await loadConfig(cwd) if (tsConfig.resultType === "failed") { throw new Error( \`Failed to load ${config.tsx ? "tsconfig" : "jsconfig"}.json. ${ tsConfig.message ?? "" }\`.trim() ) } return configSchema.parse({ ...config, resolvedPaths: { tailwindConfig: path.resolve(cwd, config.tailwind.config), tailwindCss: path.resolve(cwd, config.tailwind.css), utils: await resolveImport(config.aliases\["utils"\], tsConfig), components: await resolveImport(config.aliases\["components"\], tsConfig), ui: config.aliases\["ui"\] ? await resolveImport(config.aliases\["ui"\], tsConfig) : await resolveImport(config.aliases\["components"\], tsConfig), }, }) } ``` Let’s break this function down. ### loadConfig ```js // Read tsconfig.json. const tsConfig = await loadConfig(cwd) ``` loadConfig is imported from [tsconfig-paths](https://www.npmjs.com/package/tsconfig-paths). This function loads the tsconfig.json or jsconfig.json. It will start searching from the specified cwd directory. ```js if (tsConfig.resultType === "failed") { throw new Error( \`Failed to load ${config.tsx ? "tsconfig" : "jsconfig"}.json. ${ tsConfig.message ?? "" }\`.trim() ) } ``` This is an error check that throws an error when the tsconfig or jsconfig fails to load. ```js return configSchema.parse({ ...config, resolvedPaths: { tailwindConfig: path.resolve(cwd, config.tailwind.config), tailwindCss: path.resolve(cwd, config.tailwind.css), utils: await resolveImport(config.aliases\["utils"\], tsConfig), components: await resolveImport(config.aliases\["components"\], tsConfig), ui: config.aliases\["ui"\] ? await resolveImport(config.aliases\["ui"\], tsConfig) : await resolveImport(config.aliases\["components"\], tsConfig), }, }) ``` The parsed [configSchema](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L43) is what [resolveConfigPaths](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L65C23-L65C41) returns. This code snippet uses [path.resolve](https://nodejs.org/api/path.html#pathresolvepaths) and [resolveImport](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/resolve-import.ts#L3). ### resolveImport ```js import { createMatchPath, type ConfigLoaderSuccessResult } from "tsconfig-paths" export async function resolveImport( importPath: string, config: Pick<ConfigLoaderSuccessResult, "absoluteBaseUrl" | "paths"> ) { return createMatchPath(config.absoluteBaseUrl, config.paths)( importPath, undefined, () => true, \[".ts", ".tsx"\] ) } ``` You can read more about [createMatchPath](https://www.npmjs.com/package/tsconfig-paths#creatematchpath).
Turns out, this little detour to understand the series of function calls following getConfig is there to check whether there is an existing component config. ```js // Check for existing component config. const existingConfig = await getConfig(cwd) if (existingConfig) { return existingConfig } ``` Just to recap, the call stack is like this: getConfig calls getRawConfig, getRawConfig uses `explorer.search` (from cosmiconfig), and then, if there is an existing component config, resolveConfigPaths is returned, which uses helper functions such as createMatchPath provided by the tsconfig-paths package. All this trouble just to check if there’s an existingConfig. Why tho? The answer lies in the different schema that you get when there’s an existing component config available ([configSchema](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L43) and [rawConfigSchema](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L20)). There’s something unique about the way these functions are organised! > _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/) About me: --------- Website: [https://ramunarasinga.com/](https://ramunarasinga.com/) Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/) Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga) Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com) [Build shadcn-ui/ui from scratch](https://tthroo.com/) References: ----------- 1. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L55](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L55) 2. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L4](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L4) 3.
[https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L2](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-config.ts#L2) 4. [https://www.npmjs.com/package/tsconfig-paths](https://www.npmjs.com/package/tsconfig-paths) 5. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/resolve-import.ts#L3](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/resolve-import.ts#L3) 6. [https://www.npmjs.com/package/tsconfig-paths#creatematchpath](https://www.npmjs.com/package/tsconfig-paths#creatematchpath)
ramunarasinga
1,908,204
Day 3
I am a hobbyist Python dev now. Working on a choose your own adventure game and having a lot of fun....
0
2024-07-01T21:21:57
https://dev.to/myrojyn/day-3-4j3g
I am a hobbyist Python dev now. Working on a choose your own adventure game and having a lot of fun. Here's to hoping that this is building a bridge from hobbyist to being employed for it.
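Not the author's code, but for readers curious what a choose-your-own-adventure game is often built around, here is a minimal sketch: a scene graph plus a loop that walks it using a list of choices (all names invented):

```python
# Hypothetical sketch: each scene maps a choice label to the next scene.

def play(scenes, start, choices):
    """Return the list of scenes visited for a given sequence of choices."""
    current = start
    visited = [current]
    for choice in choices:
        options = scenes[current]
        if choice not in options:
            break  # invalid choice or dead-end scene: stop the walk
        current = options[choice]
        visited.append(current)
    return visited

scenes = {
    "cave": {"left": "river", "right": "dragon"},
    "river": {"swim": "treasure"},
    "dragon": {},
    "treasure": {},
}

print(play(scenes, "cave", ["left", "swim"]))  # → ['cave', 'river', 'treasure']
```

In a real game the loop would read choices from `input()` instead of a list, but the data-driven scene graph stays the same.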
myrojyn
1,908,162
Getting Started with Synthetic Monitoring on GCP and Datadog
When you are talking about Monitoring, we often think about getting information on cpu usage, memory...
0
2024-07-01T21:17:50
https://medium.zenika.com/getting-started-with-synthetic-monitoring-on-gcp-and-datadog-2391b9f38025
cloud, testing, performance, googlecloud
When talking about monitoring, we often think about getting information on CPU usage, memory usage and other classic ops metrics. A 2024 trend in the monitoring landscape is also to gather end-to-end metrics, and that’s why we see more and more “Synthetic Monitoring” panels in modern observability tools. Like in the testing pyramid model, testing end-to-end metrics is a complementary approach to the metrics we are used to checking. With new SEO constraints linked to the performance of web applications, many companies are now managing end-to-end metrics to get the full picture of what is working and how fast their websites are. “Everything fails, all the time” is a famous quote from Amazon's Chief Technology Officer Werner Vogels. This means that software and distributed systems may eventually fail, because something can always go wrong. ## Definition & Concepts In a few words: it’s a way to monitor your application by simulating user actions and business-critical scenarios. This aims to warn you of production-related issues, so you can fix them before they impact most of your users. If you are familiar with end-to-end testing, you are almost good to go. The idea here is to execute scripts that mimic your users and “run” the most business-critical scenarios. The key difference is that you are running this on your production environments, and the focus is on the performance of your application. Once your scenario is executed, you’ll get plenty of metrics about your application, whether related to web performance or to one of your internal components. Based on these metrics, you’ll be able to know where effort needs to be made on your assets to make your application more robust and fault-tolerant. In general, there are different types of monitoring that fall under synthetic monitoring: ### 1/ Availability Monitoring Verifies that your service is available at any time.
It is a bit more sophisticated than a simple health check; instead, this monitor ensures that the service is running well and responding as intended. ### 2/ Transaction Monitoring A step ahead of Availability Monitoring: now we add scripts that simulate user interactions to make sure business-critical scenarios work as expected. ### 3/ Web performance monitoring As its name suggests, it focuses on application performance through Core Web Vitals, which helps identify improvements or degradations for your end-users. ## Use-case In this article, we will focus on Transaction & Web performance monitoring. Our critical scenario is: - A user navigates to https://training.zenika.com - Types “CKA” into the search bar - Should be redirected to a page containing results. - Chooses the first training. - Should be redirected to the training detail page. ## Synthetic monitoring setup on Google Cloud _Demo: https://github.com/Tarektouati/GCP-synthetic-monitoring_ A synthetic monitor is composed of 2 Google Cloud components: a cloud function, and a monitor attached to the cloud function. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08p8458s1wbetivn501k.jpeg) We’ll create a Node.js cloud function with Puppeteer inside, which iterates through our user’s scenario. Since it’s Puppeteer, you can write it yourself, relying on testing-library best practices and installing pptr-testing-library. If you prefer, you can use the Chrome Recorder to generate your Puppeteer user journey. Once you are good to go, deploy the cloud function in your desired region by running: ```sh gcloud functions deploy <YOUR_CLOUD_FUNC_NAME> --gen2 --runtime=nodejs18 --region=<REGION> --source=. --entry-point=<YOUR_CLOUD_FUNC_ENTRYPOINT> --memory=2G --timeout=60 --trigger-http ``` You should see your cloud function available on the GCP console ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdh0iauv4iqf0e3fa160.png) Next, attach a monitor to that cloud function: ```sh gcloud monitoring uptime create <YOUR_MONITOR_NAME> --synthetic-target=projects/<PROJECT_ID>/locations/<REGION>/functions/<YOUR_CLOUD_FUNC_NAME> --period=5 ``` You should also see the monitor available and configured on the GCP console ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0rmqsq0dyp8x9irhwl0.png) Navigate to the monitor detail page to see whether its status is passing or not. Go ahead and create an Alerting policy; this step is crucial, as it will notify you when your monitoring goes wrong. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dge07xild0y4c4qdx39q.png) Define a duration before an incident is declared, select a channel (email, Slack, PagerDuty, …), and add instructions to help the on-call person understand the incident. So far we have seen Transaction Monitoring with GCP, but what about Web performance monitoring? This doesn’t come out of the box, but it is still possible on GCP by combining Puppeteer with Lighthouse (check out the Puppeteer documentation for this: https://github.com/GoogleChrome/lighthouse/blob/main/docs/puppeteer.md).
## Setting up synthetic monitoring on Datadog A synthetic monitor on Datadog is composed of different components: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ut1ggv520xbs1xmm2q0s.png) Select **UX Monitoring > Synthetic tests > New Test**, then create a “Browser test”. Configure your test by filling in: - Target URL - Name - Browser targets (Chrome, Firefox, …) - Locations: choose one or multiple locations based on your business requirements ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2xo4w14zsblomde2iho6.png) Next, as in the GCP part, we need to define the test period and alerting conditions. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c1zrwkze02s1y2iu43wy.png) Creating our test scenario is quite easy and fast: Datadog allows you to record a journey directly from your browser (this requires a browser extension to be installed). Go ahead and record your own journey, and once you are satisfied, create your monitor. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3vk20delpxd8y1rpcbq.png) By default, a Browser performance dashboard is already available. This one showcases metrics like: - success rate per browser (chosen in the test configuration in the steps above) - Core Web Vitals - Long-running tasks (which can be painful for your users) - Third-party integrations - … ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ob49kbuj54wefn2aqql.png) ## Conclusion Leveraging Google Cloud for Availability and Transaction Monitoring is a robust and efficient choice, especially for those already integrated into the Google Cloud ecosystem. The seamless integration and comprehensive tools available within Google Cloud ensure thorough and effective transaction monitoring. However, when it comes to web performance monitoring, while it’s still possible on Google Cloud, exploring Datadog can provide additional benefits.
Datadog’s Real User Monitoring (RUM) is particularly noteworthy, offering advanced capabilities and insights that might better serve your web performance monitoring needs. More information: [Link to Datadog doc](https://docs.datadoghq.com/synthetics/) and [Link to Google Cloud doc](https://cloud.google.com/monitoring/uptime-checks/introduction)
tarektouati
1,908,165
Simulating a Traffic Light with Bacon.js and state machine
Handling asynchronous events and managing state effectively are essential skills. Functional Reactive...
0
2024-07-01T21:15:38
https://dev.to/francescoagati/simulating-a-traffic-light-with-baconjs-and-state-machine-48f0
javascript, baconjs, frp, statemachine
Handling asynchronous events and managing state effectively are essential skills. Functional Reactive Programming (FRP) provides a powerful paradigm that simplifies these tasks by treating events as continuous streams of data. ### Understanding the Traffic Light State Machine Let's start by defining the states and transitions of our traffic light using a simple state machine approach: ```javascript const Bacon = require('baconjs'); // Define traffic light states and transitions const trafficLightStateMachine = { initialState: 'Green', transitions: { 'Green': { nextState: 'Yellow', duration: 3000 }, 'Yellow': { nextState: 'Red', duration: 1000 }, 'Red': { nextState: 'Green', duration: 2000 } } }; ``` Here, the traffic light begins at `'Green'`, transitions to `'Yellow'` after 3 seconds, then to `'Red'` after 1 second, and finally back to `'Green'` after another 2 seconds. ### Simulating Traffic Light Events To simulate the traffic light's behavior, we'll create a stream of events using Bacon.js: ```javascript const simulateTrafficLight = Bacon.fromArray([ { type: 'state', value: 'Green' }, { type: 'timeout' }, { type: 'state', value: 'Yellow' }, { type: 'timeout' }, { type: 'state', value: 'Red' }, { type: 'timeout' }, { type: 'state', value: 'Green' }, { type: 'timeout' } ]); ``` This `simulateTrafficLight` stream alternates between emitting state change events (`'state'`) and timeout events (`'timeout'`), mimicking the traffic light's transitions in a controlled manner. ### Implementing the State Machine with `withStateMachine` The heart of our simulation lies in using `withStateMachine` provided by Bacon.js. 
It allows us to model the traffic light's behavior based on the defined state machine: ```javascript simulateTrafficLight .withStateMachine(trafficLightStateMachine.initialState, function(state, event) { if (event.type === 'state') { // Emit the current state and schedule the next transition return [state, [{ type: 'timeout', delay: trafficLightStateMachine.transitions[state].duration }]]; } else if (event.type === 'timeout') { // Transition to the next state const nextState = trafficLightStateMachine.transitions[state].nextState; return [nextState, [{ type: 'state', value: nextState }]]; } else { // Pass through unknown events return [state, [event]]; } }) .log(); ``` In this setup: - When a `'state'` event is encountered, it emits the current state and schedules the next transition after the specified duration. - When a `'timeout'` event occurs, it transitions to the next state as defined in the state machine. - Any other events are passed through without changes. ### Visualizing the Traffic Light Simulation By logging the output of our state machine, we can observe the sequence of state changes and timeouts as they occur based on our predefined simulation: ```javascript simulateTrafficLight .withStateMachine(/*...*/) .log(); ``` Functional Reactive Programming with Bacon.js offers a straightforward yet powerful approach to managing state and handling events in JavaScript. By using FRP principles, you can build applications that are not only responsive but also easier to maintain and extend over time. Mastering these concepts opens up possibilities for creating more interactive and dynamic web experiences. Whether you're building a traffic light simulator or handling real-time data updates, Bacon.js and FRP provide a solid foundation for modern JavaScript development. In conclusion, diving into FRP with Bacon.js can elevate your JavaScript skills and empower you to tackle complex event-driven scenarios with confidence.
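Stripped of Bacon.js, the traffic-light state machine above boils down to a pure transition function: given the current state, look up the next state and the delay before the switch. Here is a minimal plain-JavaScript sketch of that idea (no Bacon.js involved, and the durations are looked up but not actually scheduled):

```javascript
// Transition table mirroring trafficLightStateMachine from the article.
const transitions = {
  Green:  { nextState: 'Yellow', duration: 3000 },
  Yellow: { nextState: 'Red',    duration: 1000 },
  Red:    { nextState: 'Green',  duration: 2000 }
};

// Pure step function: current state in, next state and delay out.
function step(state) {
  return transitions[state];
}

// Walk one full cycle starting from the initial 'Green' state.
let state = 'Green';
const cycle = [state];
for (let i = 0; i < 3; i++) {
  state = step(state).nextState;
  cycle.push(state);
}
console.log(cycle.join(' -> ')); // Green -> Yellow -> Red -> Green
```

Because the transition logic is a pure function, it can be unit-tested without timers or streams; `withStateMachine` then only adds the plumbing that feeds events through it over time.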
francescoagati
1,908,163
Automate your GitHub Issues with PowerShell and GitHub API
I made a post on how to authenticate to the GitHub API to perform tasks on your repositories, in this...
0
2024-07-01T21:07:45
https://dev.to/omiossec/automate-your-github-issues-with-powershell-and-github-api-2l39
powershell, github
I made a post on how to authenticate to the GitHub API to perform tasks on your repositories; in this article I will show how to automate actions in GitHub. I will use GitHub Issues as examples and token authentication (the standard one, without restrictions), but all the scripts presented here will work with any other authentication method. The scenario is simple: you have one repository in a GitHub organization and you want to automate some issue-related tasks. Let's start by creating an issue for a repository in PowerShell. ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName, # the title of the Issue [Parameter(Mandatory=$true)] [string] $issueTitle, # the body of the issue [Parameter(Mandatory=$true)] [string] $issueBody ) $authenticationToken = [System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "Accept" = "application/vnd.github+json" "X-GitHub-Api-Version" = "2022-11-28" } $issueCreationUri = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues" $body = @{ "title" = $issueTitle; "body"= $issueBody} | ConvertTo-Json $githubIssue = Invoke-RestMethod -Method post -Uri $issueCreationUri -Headers $headers -body $body $githubIssue ``` As seen in the previous article, we need to convert the token (a string) to base64 and add it to the header used in the request. This header includes the content type (JSON), the Accept field (recommended by GitHub), and the API version (optional).
We need to create the HTTP body of the request by using a hashtable converted to JSON. We need to build the API URI using the Organization name and the Repository name. Then we use Invoke-RestMethod to send the query using the HTTP POST method. To run it: `.\createIssue.ps1 -accessToken "<GitHub Token>" -orgaName "<Orga Name>" -reposName "<Repos name>" -issueTitle "something doesn't work" -issueBody "I can't access the service"` We can list issues from a repository: ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName ) $authenticationToken = [System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "X-GitHub-Api-Version" = "2022-11-28" } # Query the API to list all issues in a repository $reposAPIUri = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues" $githubIssues = Invoke-RestMethod -Method get -Uri $reposAPIUri -Headers $headers foreach ($issue in $githubIssues) { $issue.title $issue.Id $issue.created_at.dateTime $issue.locked $issue.State $issue.user.login $issue.number } ``` Here, we use almost the same header, but the Accept field is optional. We build the URI and use Invoke-RestMethod to query the API. The result in $githubIssues is an array; you need to use a foreach to extract information from it.
We can get the details of an issue by using the issue number: ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName, # The issue number [Parameter(Mandatory=$true)] [int] $issueNumber ) $authenticationToken = [System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "Accept" = "application/vnd.github.text+json" "X-GitHub-Api-Version" = "2022-11-28" } $issueIUri = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues/$($issueNumber)" $githubIssue = Invoke-RestMethod -Method get -Uri $issueIUri -Headers $headers $githubIssue ``` Here we need to use the Accept field with "application/vnd.github.text+json" to indicate we only want the text representation of the Markdown, but you can use other options: - application/vnd.github.raw+json to return the raw Markdown, as in VS Code - application/vnd.github.html+json to return the HTML representation of the issue - application/vnd.github.full+json to return the raw Markdown, the text representation and the HTML representation in body, body_text and body_html To execute: `.\getissue.ps1 -accessToken "<GitHub Token>" -orgaName "<Orga Name>" -reposName "<Repos name>" -issueNumber <issue Number>` You can also get the list of comments for an issue: ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName, # The issue number [Parameter(Mandatory=$true)] [int] $issueNumber ) $authenticationToken =
[System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "Accept" = "application/vnd.github.full+json" "X-GitHub-Api-Version" = "2022-11-28" } $issueCommentsURI = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues/$($issueNumber)/comments" $githubIssueComments = Invoke-RestMethod -Method get -Uri $issueCommentsURI -Headers $headers foreach ($comment in $githubIssueComments){ $comment.body $comment.id $comment.user.login $comment.url $comment.reactions.total_count } ``` Here we have the same header as in the previous example, but the Accept field uses application/vnd.github.full+json to get all three representations of the comment body. The result is an array that can be parsed with a foreach; it contains complex objects such as user and reactions. To create a comment, you can use the same URI with a POST HTTP method: ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName, # the body of the comment [Parameter(Mandatory=$true)] [string] $commentBody, # The issue number [Parameter(Mandatory=$true)] [int] $issueNumber ) $authenticationToken = [System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "Accept" = "application/vnd.github+json" "X-GitHub-Api-Version" = "2022-11-28" } $body = @{ "body"= $commentBody} | ConvertTo-Json $issueCommentsURI = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues/$($issueNumber)/comments" $githubCreateIssueComments = Invoke-RestMethod -Method POST -Uri $issueCommentsURI -Headers $headers -body $body $githubCreateIssueComments ``` To run it:
`.\createcomment.ps1 -accessToken "<GitHub Token>" -orgaName "<Orga Name>" -reposName "<Repos name>" -issueNumber <issue Number> -commentBody "I have the same problem"` You can manage labels on an issue: list, add, remove, and update them. To list labels: ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName, # The issue number [Parameter(Mandatory=$true)] [int] $issueNumber ) $authenticationToken = [System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "Accept" = "application/vnd.github+json" "X-GitHub-Api-Version" = "2022-11-28" } $issueURI = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues/$($issueNumber)/labels" $githubIssueLabels = Invoke-RestMethod -Method get -Uri $issueURI -Headers $headers foreach ($label in $githubIssueLabels) { $label.name $label.description $label.default } ``` The default property indicates whether the label is one of the standard GitHub labels (bug, triage, documentation, for example) or a custom label.
To add labels to an issue: ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName, # The issue number [Parameter(Mandatory=$true)] [int] $issueNumber, # Array of labels [Parameter(Mandatory=$true)] [array] $issueLabels ) $authenticationToken = [System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "Accept" = "application/vnd.github+json" "X-GitHub-Api-Version" = "2022-11-28" } $body = @{ "labels" = $issueLabels} | ConvertTo-Json $issueLabelsURI = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues/$($issueNumber)/labels" $githubIssueLabels = Invoke-RestMethod -Method post -Uri $issueLabelsURI -Headers $headers -body $body $githubIssueLabels ``` A POST HTTP method is used to add labels. Labels are passed as an array, which is used to build the body of the request and then converted to JSON.
You can delete a label by using the DELETE HTTP method: ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName, # The issue number [Parameter(Mandatory=$true)] [int] $issueNumber, # The label to remove [Parameter(Mandatory=$true)] [string] $issueLabelName ) $authenticationToken = [System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "Accept" = "application/vnd.github+json" "X-GitHub-Api-Version" = "2022-11-28" } $issueLabelsURI = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues/$($issueNumber)/labels/$($issueLabelName)" $githubIssueLabels = Invoke-RestMethod -Method DELETE -Uri $issueLabelsURI -Headers $headers $githubIssueLabels ``` Another action that can be done with the GitHub API is to lock an issue: ```powershell param( # The access token to the GitHub Rest API [Parameter(Mandatory=$true)] [string] $accessToken, # The Organisation name [Parameter(Mandatory=$true)] [string] $orgaName, # The repository Name [Parameter(Mandatory=$true)] [string] $reposName, # The issue number [Parameter(Mandatory=$true)] [int] $issueNumber, # The lock reason [Parameter(Mandatory=$true)] [string] [ValidateSet("off-topic", "too heated", "resolved", "spam")] $lockReason ) $authenticationToken = [System.Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$accessToken")) $headers = @{ "Authorization" = [String]::Format("Basic {0}", $authenticationToken) "Content-Type" = "application/json" "Accept" = "application/vnd.github+json" } $body = @{ "lock_reason" = $lockReason} | ConvertTo-Json $issueURI = "https://api.github.com/repos/$($orgaName)/$($reposName)/issues/$($issueNumber)/lock" Invoke-RestMethod
-Method put -Uri $issueURI -Headers $headers -body $body ``` To lock an issue you need to choose a reason among off-topic, too heated, resolved, or spam; the $lockReason parameter validates this set and is used to create the body of the request. These are small examples of how to automate GitHub Issues with PowerShell. You can explore [the API to do more](https://docs.github.com/en/rest/quickstart?apiVersion=2022-11-28). You can find the code in this [GitHub repository](https://github.com/omiossec/pwsh-GitHub-API/tree/main/AutomateGitHub)
omiossec
1,908,161
A simple guide to React.js and Next.js
React.js and Next.js are popular frontend technologies or tools used to build websites or web...
0
2024-07-01T21:05:41
https://dev.to/__mulero/a-simple-guide-to-reactjs-and-nextjs-1k02
React.js and Next.js are popular frontend technologies used to build websites and web applications. If you're new to web development, this write-up can be useful in understanding what these technologies do and how they differ. React.js, also called React, is a JavaScript library created by Facebook. It makes things easier for developers by breaking down user interfaces (UIs) into manageable, reusable parts known as components. Components are like puzzle pieces or LEGO blocks that we can put together to create a complete website or web application. Next.js is a JavaScript framework built on top of React. Created by Vercel, it makes React better by adding extra features that make building complex websites easier. It handles concerns like rendering web pages on the server and generating static pages ahead of time. React.js is more flexible than Next.js: React allows you to configure and set up the project the way you like, making it possible to choose the different libraries and tools you need, whereas Next.js comes with a lot of built-in features and conventions, limiting the need for manual setup and configuration. Next.js provides features like simple routing, server-side rendering, and static site generation, making it easier to create SEO-friendly websites. I am excited about participating in the HNG internship to enhance my React.js skills and gain hands-on experience in front-end development. React.js's powerful features and well-known flexibility make it an excellent choice for building websites and web applications, and I'm excited to use this technology to contribute and build meaningful projects during the internship at HNG. Click the link to join the HNG internship: [https://hng.tech/internship](https://hng.tech/internship). You can also hire elite developers here: [https://hng.tech/hire](https://hng.tech/hire).
__mulero
1,908,159
READING RECENT DOCUMENTATION CAN SAVE YOU A LOT OF TIME: MY EXPERIENCE TRYING TO UPSERT VECTORS TO MY PINECONE DATABASE
My name is Ezenwa Victory Chibuikem, a recent graduate of Electrical and Electronics Engineering from...
0
2024-07-01T21:04:42
https://dev.to/victory_ezenwa_87a0d0e9da/reading-recent-documentation-can-save-you-a-lot-of-time-my-experience-trying-to-upsert-vectors-to-my-pinecone-database-ib1
My name is Ezenwa Victory Chibuikem, a recent graduate of Electrical and Electronics Engineering from FUTO. I am a data scientist looking to extend my expertise to machine learning engineering. I view machine learning engineering as a role designed for software engineers who can build machine learning models. Recently, I signed up for and was accepted into HNG Internship 11, where I joined the Backend track to gain experience as a software engineer, particularly in backend development. If you are seeking valuable experience in fields like mobile development, DevOps, backend, frontend, or data analysis, HNG is an excellent starting point. You can explore more at HNG internship (https://hng.tech/internship) or HNG hire (https://hng.tech/hire). Today, I will be writing about my most recent backend roadblock. Although I wouldn’t call it a difficult backend problem, it was certainly one I couldn’t have solved on my own. I was building an e-commerce product recommendation system that used inputted texts and images to recommend products to users. This was done using Flask, and Pinecone served as my vector database. After setting up a connection, I couldn’t seem to upsert any records. The error message displayed wasn’t helpful either, as it didn’t point to a failed insertion but rather to incorrect dimensions of my vectors. At first, I thought the issue was with my vectorization. I even changed the vectorization scheme twice (hard lesson: check your function arguments before changing your entire logic). After hours of debugging and searching, I was able to narrow it down to the upsert line of code and began rechecking it against examples on the internet. I found someone with a similar problem on Stack Overflow, and my issue was solved. The error resulted from assigning a value to a deprecated argument. Initially, I was frustrated by the energy spent overcoming this, but in the end, I couldn’t be happier that I got to move on to other parts of the project.
I hope this was an insightful read. I’m looking forward to seeing how much my software engineering skills will improve after this internship. Till next time!
victory_ezenwa_87a0d0e9da
1,908,158
Cybersecurity 101 for Developers: From Zero to Hero
In today’s digital age, cybersecurity is more important than ever. As a developer, understanding the...
0
2024-07-01T21:03:50
https://devtoys.io/2024/07/01/cybersecurity-101-for-developers-from-zero-to-hero/
cybersecurity, security, secops, devtoys
--- canonical_url: https://devtoys.io/2024/07/01/cybersecurity-101-for-developers-from-zero-to-hero/ --- In today’s digital age, cybersecurity is more important than ever. As a developer, understanding the basics of cybersecurity and how to protect your applications from common vulnerabilities is crucial. This guide will take you from zero to hero, covering fundamental concepts, typical vulnerabilities, and best practices to secure your applications. --- ## Understanding Cybersecurity Basics **What is Cybersecurity?** Cybersecurity refers to the practice of protecting systems, networks, and programs from digital attacks. These attacks often aim to access, change, or destroy sensitive information, extort money from users, or interrupt normal business processes. Implementing effective cybersecurity measures is particularly challenging today because there are more devices than people, and attackers are becoming more innovative. --- ## The CIA Triad At the core of cybersecurity are three fundamental principles, often referred to as the CIA triad: - **Confidentiality:** Ensuring that sensitive information is accessed only by authorized individuals. - **Integrity:** Protecting information from being altered by unauthorized parties. - **Availability:** Ensuring that information and resources are accessible to those who need them when they need them. --- ## Common Vulnerabilities **1. SQL Injection** SQL Injection occurs when an attacker exploits vulnerabilities in an application’s software by inserting malicious SQL code into an input field, allowing them to manipulate the database. **Ways to protect:** - Use prepared statements and parameterized queries. - Employ ORM (Object-Relational Mapping) frameworks that handle SQL queries safely. - Validate and sanitize all user inputs. --- **2. Cross-Site Scripting (XSS)** XSS attacks occur when an attacker injects malicious scripts into content from otherwise trusted websites. 
These scripts can execute in the user’s browser, leading to unauthorized actions or data theft. **Ways to protect:** - Encode data before rendering it in the browser. - Use content security policies (CSP) to restrict the sources from which scripts can be loaded. - Validate and sanitize all user inputs. --- **3. Cross-Site Request Forgery (CSRF)** CSRF attacks trick users into performing actions they did not intend to perform by exploiting the trust a site has in the user’s browser. **Ways to protect:** - Use anti-CSRF tokens. - Ensure that state-changing requests require POST requests. - Implement same-site cookies to prevent cross-origin requests. --- **4. Insecure Deserialization** Insecure deserialization occurs when untrusted data is used to abuse the logic of an application, inflict a denial of service (DoS) attack, or execute arbitrary code. **Ways to protect:** - Avoid accepting serialized objects from untrusted sources. - Implement integrity checks such as digital signatures on serialized objects. - Use serialization libraries that enforce strict controls over which types of objects can be deserialized. --- **5. Security Misconfiguration** Security misconfiguration is the most common issue, often resulting from default configurations, incomplete configurations, open cloud storage, or misconfigured HTTP headers. **Ways to protect:** - Implement a robust configuration management process. - Regularly update and patch systems. - Use automated tools to scan for misconfigurations. --- ## 👀 Are you looking to deep dive into learning more foundational knowledge on cyber security? This is an AWESOME read you NEED to check out!🧐 [![How Cybersecurity Really Works](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojak731585sx4e8ziy63.jpg)](https://amzn.to/4ctSkPB) ## [How Cybersecurity Really Works: A Hands-On Guide for Total Beginners](https://amzn.to/4ctSkPB) --- ## Best Practices for Securing Applications **1. 
Secure Development Lifecycle (SDL)** Incorporate security at every phase of the software development lifecycle (SDL). This includes planning, design, coding, testing, and maintenance. Adopting an SDL ensures that security is a priority from the beginning. --- **2. Code Reviews and Static Analysis** Regular code reviews and static code analysis can identify potential security vulnerabilities before they are exploited. Use automated tools to scan your code for common security issues. --- **3. Penetration Testing** Conduct regular penetration testing to identify and mitigate vulnerabilities. Penetration testing simulates an attack on your system, helping you understand how an attacker might exploit vulnerabilities. --- **4. Keep Dependencies Updated** Outdated libraries and frameworks can introduce security vulnerabilities. Use tools like Dependabot or Snyk to keep your dependencies up to date and secure. --- **5. Educate and Train Your Team** Ensure that all team members are aware of security best practices and understand the importance of cybersecurity. Regular training and education can help keep your team informed about the latest threats and mitigation strategies. --- ## Conclusion Cybersecurity is an ongoing process that requires vigilance and continuous improvement. By understanding the basics, recognizing common vulnerabilities, and implementing best practices, you can significantly enhance the security of your applications. Remember, security is everyone’s responsibility, and staying informed and proactive is key to protecting your digital assets. By following this guide, you will be well on your way from zero to hero in cybersecurity, ensuring that your applications remain safe and secure in the face of evolving threats. 
For further reading and resources, consider exploring the following: - [OWASP Top Ten](https://owasp.org/www-project-top-ten/) - [SANS Institute](https://www.sans.org/) - [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) - [Cybersecurity and Infrastructure Security Agency (CISA)](https://www.cisa.gov/) By leveraging these resources, you can deepen your understanding of cybersecurity and stay updated with the latest trends and best practices. Happy coding, and stay secure! --- ## ❤️ If you enjoyed this article please come visit our hacker community [DevToys.io](https://devtoys.io) and keep up with the latest, news, tools and gadgets by signing up on our newsletter! 🥷🏻
3a5abi
1,907,285
More on React States
This section contains a general overview of topics that you will learn in this lesson. How to...
0
2024-07-01T21:01:30
https://dev.to/ark7/more-on-react-states-4p1
webdev, javascript, programming, tutorial
This section contains a general overview of topics that you will learn in this lesson. How to structure state. How state updates. Learn about controlled components. ## How to structure state Managing and structuring state effectively is by far one of the most crucial parts of building your application. If not done correctly, it can become a source of bugs and headaches. Poor state management can lead to unnecessary **re-renders**, **complex** and **hard-to-debug** code, and unpredictable application behavior. The assignment items go through the topic thoroughly, but as a general rule of thumb: **don’t put values in state that can be calculated using existing values, state, and/or props**. Derived state can lead to inconsistencies, especially when the derived data becomes out of sync with the source state. Instead, compute these values on-the-fly using functions or memoization techniques. Additionally, always strive to keep state localized as much as possible. Avoid lifting state too high up the component tree unless absolutely necessary. By keeping state close to where it is used, you reduce the complexity of state management and make your components more modular and easier to test. Another important aspect is to group related state variables together. When state variables are logically connected, it makes sense to manage them as a single unit, often using objects or arrays. This practice not only helps in organizing your state but also simplifies the process of updating multiple related state variables. ## State should not be mutated Mutating state is a no-go area in React as it leads to **unpredictable results**. Primitives (like numbers, strings, and booleans) are already immutable, but if you are using reference-type values, such as arrays and objects, you should never mutate them. According to the React documentation, we should treat state as if it was _immutable_. 
To change state, we should always use the state updater function, which, in the case of the example below, is the setPerson function. ## Why Not Mutate State? 1. Unpredictable Results: Directly mutating state can lead to inconsistencies and bugs, as React might not recognize the changes, leading to unpredictable UI updates. 2. Debugging Difficulties: Mutable state makes it harder to track changes, complicating the debugging process. 3. React Optimization: React relies on immutability to optimize re-renders. Mutations can disrupt this optimization, causing performance issues. Primitives in JavaScript, such as strings, numbers, and booleans, are inherently immutable. Assigning a new value to a state variable containing a primitive type will create a new instance of that value. Always use the state updater function provided by React to change state. For example, if you're managing a person object, use setPerson to update the state. Here is an example: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqrz8vhsxi4hbsye8a8n.png) ## How state updates State updates are asynchronous. What this implies is that whenever you call the setState function, React will apply the update in the next component render. This concept can take a while to wrap your head around. With a lot of practice, you’ll get the hang of it in no time. Remember, state variables aren’t reactive; the component is. This can be understood by the fact that calling _setState_ re-renders the entire component instead of just changing the state variable on the fly.
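The "treat state as immutable" rule boils down to producing new objects rather than editing old ones. The sketch below shows the pattern in plain JavaScript, outside React for clarity; the names mirror a hypothetical person state, and the resulting updatedPerson is what you would hand to a setter like setPerson:

```javascript
// Reference-type state: update it immutably by building a new object.
const person = { name: 'Ada', address: { city: 'London' } };

// Wrong: direct mutation -- React may not detect the change.
// person.address.city = 'Paris';

// Right: copy every level you change with the spread syntax.
const updatedPerson = {
  ...person,
  address: { ...person.address, city: 'Paris' }
};

console.log(person.address.city);        // London (original untouched)
console.log(updatedPerson.address.city); // Paris
```

Because updatedPerson is a new object reference, an equality check against the old state is enough for React to know the component must re-render.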
ark7
1,908,156
You Don't Know Undo/Redo
Look at the gif below. It shows a proof-of-concept implementation of collaborative undo-redo,...
27,923
2024-07-01T20:59:21
https://dev.to/isaachagoel/you-dont-know-undoredo-4hol
webdev, javascript, programming, learning
Look at the gif below. It shows a proof-of-concept implementation of collaborative undo-redo, including "history mode undo", conflict handling, and async operations. In the process of designing and implementing it, I found myself digging down rabbit holes, questioning my own assumptions, and reading academic papers. In this post, I will share my learnings. The [source code is available online](https://github.com/isaacHagoel/todo-replicache-sveltekit/tree/undo-redo). ![undo_basic_demo](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/55719699-55c0-43e0-bbfb-b6c60f067375) [Play with the live app](https://todo-replicache-sveltekit-pr-2.onrender.com/) ### Context and motivation Undo-redo is a staple of software systems, a feature so ubiquitous that users just assume apps would have it. As users, we absolutely love it because: 1. It saves us when we make mistakes (e.g. delete or move something unintentionally). 2. It encourages us to experiment and learn by doing (let's try clicking that button and see what happens; worst case we'll undo). 3. Together with redo (which is, in fact, an undo of the undo), it allows us to [back-track](https://www.geeksforgeeks.org/introduction-to-backtracking-2/) and iterate at zero cost. 4. Both undo and redo (when implemented correctly, more on that later) are non-destructive operations (!), giving us a sense of safety and comfort. As developers, however, implementing a robust undo-redo system is a non-trivial undertaking, with implications that penetrate every part of the app. In [my previous post](https://dev.to/isaachagoel/are-sync-engines-the-future-of-web-applications-1bbi), I listed undo/redo as one of the hard problems in web development, so naturally I wanted to see what it would take to add this feature to my little collaborative todo-mvc toy app. I approached it with respect because I had evidence of its tricky nature: 1. 
In a previous role, I witnessed a peer team struggling to add undo/redo to a [WYSIWYG editor](https://en.wikipedia.org/wiki/WYSIWYG). Although I wasn't involved at the technical level, I was aware of some of the pain and challenges they faced, saw how long it took, and noticed how many bugs they had. 2. When searching for undo-redo libraries, I couldn't find any that supported the multiplayer use-case. Even [the library from the company that made Replicache](https://github.com/rocicorp/undo) didn't. 3. As a user, I can't think of a single collaborative app that implements undo/redo in a way that feels right (if you know any, please leave a comment). It's also very noticeable in its absence in some [major, popular apps](https://community.atlassian.com/t5/Jira-questions/How-do-I-undo-Actions-in-Jira/qaq-p/1129291). Is it because it was too hard to implement? 4. When googling for information about undo-redo, I found (and eventually read) multiple academic papers and master's theses on the subject (e.g., [this](https://www.ksi.mff.cuni.cz/~holubova/dp/Jakubec.pdf)). They don't write those about straightforward concepts, do they? 5. I also found blog-posts from big players in the field, specifically [one from Liveblocks](https://liveblocks.io/blog/how-to-build-undo-redo-in-a-multiplayer-environment) and a short discussion about undo/redo in [a post by Figma](https://www.figma.com/blog/how-figmas-multiplayer-technology-works/). I later learned that Figma's UX around undo/redo leaves a lot to be desired, which shows that identifying the problems is easier than coming up with good solutions. ### Undo/redo is a strange beast I always knew that undo/redo is a deeply "strange" feature, but it was only when I started thinking about implementing it myself that some of its weirder aspects became more salient: 1. We all do this: Undo, undo, undo... (repeat N times); copy something; redo, redo, redo (repeat N times); paste. 2. 
And we get annoyed by this: Undo, undo, type a character, realise you can't redo anymore. (Does a disabled "redo" button mean some of the editing-history is lost forever? Spoiler alert: It depends on the implementation, as we'll see in the next section). 3. And get anxious over this: Closed the window? Bye bye undo history. Opened the app on another device or tab to keep editing from where you left off? No undo history for you (hopefully the original tab is still open somewhere). 4. And what happens if the original tab is still open, you edit on another tab, and then go back to the original tab and hit undo? Would that unintentionally corrupt the state? 5. And when working with others on a collaborative whiteboard (e.g., Figma) or similar apps, did you notice how everyone carefully stays out of other people's way and sticks to their own "territory"? What if two of them edited the same entity but not at the same time and one of them hits undo? Points 1 and 2 are intentional design decisions that make undo/redo the feature we know and love. It sacrifices flexibility for speed and simplicity. Points 3-5 are very [sus](https://www.merriam-webster.com/wordplay/what-does-sus-mean) and require a deeper look. ### Modelling a branching state-graph Let's dive into that second point first: What actually happens when you undo a few times, make a change and see your redo button disabled? The answer is: it depends. I'll explain. State changes in a system can be represented as a graph. Undo allows us to traverse the graph "backwards in time," and redo allows us to trace our steps back (so "forwards in time") to the present. Imagine we have the following sequence of operations on an element: 1. Edit "" -> "Hello". 2. Edit "Hello" -> "Hello World". 3. Undo (the state becomes "Hello" again) 4. 
Edit "Hello" -> "Hello Friend" We can build a graph in which each unique state is represented by a node (like a state machine), as follows: ![Editing with undo](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/a1d14532-533f-418d-a89a-8b6435c527d0) In this model, step 4 creates a new branch in the graph, and we lose our ability to return to the "Hello World" state. Going backwards from "Hello Friend," which is the present state, leads to "Hello" and then to "". Going forwards from there using "redo" takes us to "Hello" and then to "Hello Friend." That's because while there is always a single path backwards from any state to the initial state, there can be multiple paths forward, and "redo" can only follow one of them - the path that leads to the "present" state. Many systems follow this model - Google Slides, for example, as you can see below: ![Google slides undo loses history](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/988cbed8-d249-47c7-bcf6-663c820090ff) This way of implementing undo/redo leads to increased user anxiety (== bad user experience) because after I undo, I need to be super careful or else I'd lose the ability to restore some states. The browser's "back" and "forward" buttons, which are basically undo/redo for the URL bar, behave this way as well. But wait, this is not the only way! We can do better (this is where reading academic papers pays off). What if we built our graph such that every node represented a point in time rather than a state? Time is linear (no branching unless you live in the multiverse) and always moves forward. This approach would represent the same sequence of operations like so: ![Editing with undo - linear](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/75bad228-53fc-48a3-ad94-ede94dab7bc3) Now we can never lose any change we've made in the past. Going backwards from "Hello Friend" takes us through every change we've ever made, as shown below. 
![history_undo_demo](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/a5c00e6d-0594-44ca-bb4a-7dc71c075d21) This flavour of undo is called "History Undo" (see page 4 in [this excellent academic paper](https://web.eecs.umich.edu/~aprakash/papers/undo-tochi94.pdf)). It was first introduced by [Emacs](https://opensource.com/resources/what-emacs). It leads to a much better user experience and feels very intuitive to use. Notice that in both cases the "redo" button gets disabled when the user edits "Hello" to "Hello Friend" (which makes sense if you remember that redo is undo of undo). The difference is in how undo behaves after that point (and as a result, any subsequent "redo"). I implemented both modes in my proof of concept. "History undo" mode is enabled by default. If you want to get into the nitty-gritty, have a look at the [source code](https://github.com/isaacHagoel/todo-replicache-sveltekit/blob/undo-redo/src/lib/undo/UndoManager.ts#L117). ### The question of scope An important property of a good undo-redo implementation is that it operates in the correct scope. In most applications, the expectation is for the undo-manager's scope to be the entire session. As I continuously hit "undo," I expect all the actions I took in the session to be rolled back. If I close the session, I expect my undo stack to be lost. If the scope is smaller than the whole session, it can be disorienting for users. For example, if your text-editor has a separate undo stack from the rest of the app, people will [call you out on Twitter](https://x.com/astralwave/status/1805639315730612560): ![Linear undo-redo is broken](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/86ab4c12-72dc-48a0-9dbb-03df096cf7c4) In web applications or desktop applications that support multiple tabs (e.g., [VSCode](https://code.visualstudio.com/)), a session equals a single tab. 
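Returning to the two undo flavours for a moment: history-mode undo can be sketched minimally in TypeScript, assuming whole-state snapshots for simplicity (the real implementation linked above works with commands, not snapshots, so treat this purely as an illustration of the model):

```typescript
// History-mode undo: the timeline only ever grows, so no state is ever lost.
class HistoryUndo<T> {
  private timeline: T[];
  private idx: number; // index of the "present" state within the timeline

  constructor(initial: T) {
    this.timeline = [initial];
    this.idx = 0;
  }

  get present(): T {
    return this.timeline[this.idx];
  }

  edit(next: T): void {
    // If we edited after undoing, first record the undo walk itself
    // (Emacs-style), so every past state stays reachable via undo.
    const walk: T[] = [];
    for (let i = this.timeline.length - 2; i >= this.idx; i--) {
      walk.push(this.timeline[i]);
    }
    this.timeline.push(...walk, next);
    this.idx = this.timeline.length - 1;
  }

  undo(): void {
    if (this.idx > 0) this.idx--;
  }

  redo(): void {
    if (this.idx < this.timeline.length - 1) this.idx++;
  }
}
```

Running the article's sequence ("" -> "Hello" -> "Hello World" -> undo -> "Hello Friend") and then undoing repeatedly visits "Hello", "Hello World", "Hello", "": the "Hello World" state that branching undo would lose remains reachable.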
The expectations I described above carry over from single-user to collaborative, multi-user applications. Users expect to only be able to undo/redo their own actions. Users don't seem to expect the undo/redo stack to exceed session scope e.g. be shared between multiple tabs or multiple devices (on which they have the app running). I wonder if it's just because no app has done that yet and once someone does, it would become the new norm. Wouldn't it be great to have your history available on all your devices? In my implementation I remained within session-scope, like all standard implementations. I did it because it was the easier and quicker option. I might try to extend it to user-scope in future iterations. I am sure it will present interesting technical challenges. ### Memento vs. Command pattern One of the first resources I stumbled upon when I was doing my research was [an article in the official Redux docs](https://redux.js.org/usage/implementing-undo-history), explaining how to implement generic undo/redo using the [Memento pattern](https://en.wikipedia.org/wiki/Memento_pattern). The idea is so simple: Just save a copy of the state every time it changes and store these state copies as "past" (array), "present" and "future" (array). Whenever you want to undo, push the present state into the "future" array, pop the head of the "past" array and make it the new present, the app re-renders, voilà. No need for app-specific logic, whatever your state shape is - it just works. It sounds so alluring, so beautiful and elegant - there is only one tiny problem: it doesn't work for anything besides the simplest of apps. Here is why: 1. It doesn't deal with side-effects. Undoing and redoing actions tends to involve much more than state changes on the frontend. Apps need to create or cleanup remote resources, call APIs and do all the stuff that real apps do. 
These side-effects and the logic for addressing them are specific, and need to be handled on a case-by-case basis by the app. In other words, transition between states involves more than just replacing one state object with another. 2. The idea of replacing the state object wholesale is totally incompatible with simultaneous, collaborative editing. It leads to users constantly erasing each other's changes even if they are modifying different parts of the state. 3. Even in React apps that use Redux, not all the state is managed by the central store. There is a bunch of local state in the components. Don't we want the undo/redo manager to be able to account for local component state as well? Sadly, due to all of the above, the Memento pattern is off the table. This leaves us with the much less plug-and-play [Command pattern](https://en.wikipedia.org/wiki/Command_pattern). Instead of storing states we store commands and reverse-commands and execute them whenever we need to roll the state back or forward. "Commands" is just a fancy name for functions that modify state, e.g. "() => markTodoComplete(id, true)" and its reverse "() => markTodoComplete(id, false)". The command pattern allows us to update the state granularly with fewer collisions. It allows us to apply arbitrary logic and deal with side effects, and it doesn't know or care about the application state or where it lives. These advantages come at a cost: For every action our app can perform, we now need to implement a reverse-action, and register both with the undo manager. But wait, there's more... ### We can still have conflicts, can't we? Multiple users making changes to the same "document" at the same time means that conflicts can and will occur. Having undo-redo thrown into the mix increases the likelihood of conflicts by introducing the possibility of unintentional and hidden conflicts. 
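A minimal sketch of a command-based undo manager along these lines (the `markTodoComplete` name follows the article's own example; the manager is a deliberate simplification of the linked implementation):

```typescript
type Command = {
  run: () => void;  // the action
  undo: () => void; // its reverse-action
};

class CommandUndoManager {
  private undoStack: Command[] = [];
  private redoStack: Command[] = [];

  execute(cmd: Command): void {
    cmd.run();
    this.undoStack.push(cmd);
    this.redoStack = []; // a fresh action invalidates redo (redo is undo of undo)
  }

  undo(): void {
    const cmd = this.undoStack.pop();
    if (cmd) { cmd.undo(); this.redoStack.push(cmd); }
  }

  redo(): void {
    const cmd = this.redoStack.pop();
    if (cmd) { cmd.run(); this.undoStack.push(cmd); }
  }
}

// Example: granular state updates instead of wholesale state replacement.
const completed = new Map<string, boolean>();
const markTodoComplete = (id: string, value: boolean): Command => ({
  run: () => { completed.set(id, value); },
  undo: () => { completed.set(id, !value); },
});

const manager = new CommandUndoManager();
manager.execute(markTodoComplete("todo-1", true));
manager.undo(); // completed.get("todo-1") === false
manager.redo(); // completed.get("todo-1") === true
```

Because each command touches only the piece of state it knows about, two users operating on different items never clobber each other's changes the way whole-state snapshots would.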
When editing in real time, users usually try to stay out of each other's way, but the dimension of time makes that trickier. If I edited a place at an earlier time, and later someone made changes on top of my changes, what's gonna happen if I casually "undo" ten times? I can unintentionally cause someone else to lose work. This can also happen via indirect conflicts - for example: user A creates an item, user B edits the item, user A clicks undo - deleting the item and deleting user B's work as a result (again, without intending or even realising it). This sounds bad, right? The whole point of undo/redo is to allow users to experiment and time-travel safely, without worrying about accidentally corrupting the system's state. Sync engines, like Replicache (which our little todo app uses), have the ability to deal with "realtime" conflicts between clients via an authoritative server that can reject and revert changes. However, we don't want the user to experience errors due to rejected undo operations, or to have nothing happen because the element they are trying to modify no longer exists. See it happening in Figma in the gif below (taken from [here](https://github.com/rocicorp/undo/issues/12#issuecomment-2172386581)). Notice how some undo operations do nothing and the user needs to keep undoing to drain all the bad operations from the undo-stack. That's poor UX. We need to come up with an elegant way to deal with these situations. ![figma_undo](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/d9de6e18-745b-472e-971a-4eacf95aafb7) ### Can we simply ignore conflicts? Some smart people don't think that conflicts are a big deal in multiplayer systems. Adam Wiggins (co-founder of Heroku) for example, dismissed it in [this part of his recent talk](https://youtu.be/WEFuEY3fHd0?si=EmhrAV8LUYkhSk2V&t=794) (not in the context of undo/redo but as a general concern). 
He was later challenged about it by a question from the audience at [this timestamp](https://youtu.be/WEFuEY3fHd0?si=iw5tVQcWm2VvHgAb&t=1756) but stood his ground. To summarise his reasoning: Users stay out of each other's way - it's a social thing (true). Also, when conflicts do occur, users are smart enough to realise what happened and fix it themselves, no big deal. He does note that this is true for the app he's creating ([Muse](https://museapp.com/)), but might not apply in all cases. I have to respectfully disagree. It's cool that users find creative ways to work with broken systems, but we can't use that as an excuse for building sub-par apps. We can and should do better for our users. ### Dealing with conflicts - undo-manager perspective (in theory) So, how can we deal with these nasty conflicts? [This paper](https://web.eecs.umich.edu/~aprakash/papers/undo-tochi94.pdf) lays down a solid foundation. I'll do my best to summarise its main ideas for you. The paper discusses the problem of "undoing actions in collaborative systems" in the context of a distributed text editor called DistEdit. It suggests that the undo-manager takes a "Conflict(A,B)" callback from the app, with the following spec (see section 4.2.1): > The Conflict(A, B) function supplied by the application must return true if the adjacent operations A and B performed in sequence cannot be reordered, and false otherwise. The importance of the notion of conflict is that it imposes an ordering on operations A and B. If Conflict(A, B) is true, then the order of operations A and B cannot, in general, be changed without affecting the results. 
Furthermore, in general, operation A cannot be undone, unless the following operation B is undone. The paper then offers an insight about the users' intentions (see section 5 "Selective Undo Algorithms"): > If an operation A is undone, we assume that users want their document to go to a state that it would have gone to if operation A had never been performed, but all other operations had been performed. To achieve this, the proposed algorithm first rolls back everything that came after the operation we want to undo, makes a backup copy of the "future" stack, and tries to perform the undo by "bubbling" the operation we are trying to get rid of up the stack, kinda like [bubble sort](https://www.geeksforgeeks.org/bubble-sort-algorithm/). In each step it checks whether the operation we want to undo (A) conflicts with the next adjacent operation (B). If not, they can be swapped, and we can execute a "transposed" version of B without executing A. This "transpose(A,B)" function, which the app needs to provide, makes sense in the context of text editing, where the cursor position, for example, could be different if operation A never happened. The algorithm keeps working its way up the stack until it either reaches the top (success) or hits a conflict. If it hits a conflict, it tries to get rid of it by checking whether the "future" has the reverse operation; if yes, both can be safely removed. If the conflict cannot be removed, the algorithm determines that the operation cannot be undone (failure). When that happens, the paper offers two options (see section 8.1.4): 1. Show the conflicts to the user and ask them to resolve. 2. Tell the user about the conflict, ignore that undo operation, and allow the user to undo older operations. While I like the general idea, I had some issues with this algorithm: 1. 
For a general-purpose undo-manager, outside of collaborative text editing (which most apps use [CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) for nowadays), it seemed excessive to expect the app to provide transpose functions, which must satisfy five mathematical properties (see section 4.2.2 in the paper). 2. I don't want users to be able to try to undo something and end up failing. I want to detect the conflicts ahead of time - for better UX. 3. While I think it makes sense to skip bad operations, I am not sure that it's a good idea to ask the user to do something about it. If we wanted that, we should have implemented features like version-history or version-control rather than undo/redo. Keeping the user informed is, generally speaking, a good idea, but we should aspire to do it in a non-disruptive manner. ### Dealing with conflicts - undo-manager perspective (in practice) With all this in mind, I ended up with the following implementation: The undo-manager allows each entry (action) to have the attributes 'scopeName', 'hasUndoConflict', and 'hasRedoConflict'. Unlike the "Conflict(A,B)" function from the paper, which takes two adjacent operations, my functions check whether a single undo or redo operation is valid in the context of the current state of the app. The conflict checks run on the "head" of the undo and redo stacks after every operation, and remove conflicting entries (and everything else with the same scopeName) until a non-conflicting one is found. This way, the next action the user can take is always a non-conflicting one. The undo-manager provides a way to tell the user when conflicting entries are removed (via a "change reason"), but in the demo app I used it in a very subtle way. 
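The pruning just described can be sketched with a deliberately simplified entry shape (`scopeName` and `hasUndoConflict` match the attributes named above; everything else is trimmed, and the check is synchronous here for brevity):

```typescript
type Entry = {
  scopeName: string;
  hasUndoConflict?: () => boolean; // sync-only in this sketch
};

// After every relevant change, drop conflicting entries from the head of
// the stack, together with every entry sharing their scopeName, until a
// non-conflicting entry surfaces (or the stack is empty).
function pruneHead(stack: Entry[]): void {
  while (stack.length > 0) {
    const head = stack[stack.length - 1];
    if (!head.hasUndoConflict?.()) return; // head is safe: nothing to do
    const scope = head.scopeName;
    // remove the conflicting entry and everything in the same scope
    for (let i = stack.length - 1; i >= 0; i--) {
      if (stack[i].scopeName === scope) stack.splice(i, 1);
    }
  }
}
```

The payoff is the invariant the article describes: the entry sitting at the head of the stack, i.e. the next thing the user can undo, is always conflict-free.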
All in all, here is the type definition for a single undo-redo entry: ```typescript export type UndoEntry = { operation: () => void; reverseOperation: () => void; hasUndoConflict?: () => boolean | Promise<boolean>; hasRedoConflict?: () => boolean | Promise<boolean>; // determines what gets removed when there are conflicts scopeName: string; // will be returned from subscribeToCanUndoRedoChange so you can display it in a tooltip next to the buttons (e.g "un-create item") description: string; reverseDescription: string; }; ``` For the full implementation details and how it is used, have a look at the code, for example [here](https://github.com/isaacHagoel/todo-replicache-sveltekit/blob/a75ff89f62e1cb1e0196390586126bdff6cb733e/src/routes/list/%5BspaceID%5D/%2Bpage.svelte#L171). ### Dealing with conflicts - app perspective So the undo-manager facilitates a way to deal with conflicts, but it's up to the app to provide the actual conflict-checking logic. How should it go about that? One useful concept is "ownership". In a single-user app, there is no question about who owns the data - there is only one user, but what about a multi-user, collaborative app? We can think about it as follows: The last user who modified a piece of data (direct modification - not via undo or redo) owns it. The owner of a piece of data can safely undo/redo their changes to it without overriding someone else's changes. For example, if I created a todo item and you modified its description, I shouldn't be able to undo the creation of that item, but I should still be able to undo anything else that I have in my undo stack. The granularity of the ownership is determined by the app. If I modified the text of an item and you then marked it as completed, it is probably okay for me to undo my change because I own the description and you own the completeness. 
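The ownership idea can be turned into a concrete conflict check. In the sketch below, `me`, `recordEdit`, and the field-level granularity are illustrative assumptions for this post, not the demo app's actual API:

```typescript
const me = "user-a"; // hypothetical: the current session's user id

// Who last DIRECTLY modified each field (undo/redo does not change ownership).
const lastEditor = new Map<string, string>(); // "itemId:field" -> userId

function recordEdit(itemId: string, field: string, userId: string): void {
  lastEditor.set(`${itemId}:${field}`, userId);
}

// Undoing my change to a field conflicts if someone else now owns that field.
function hasUndoConflict(itemId: string, field: string): boolean {
  return lastEditor.get(`${itemId}:${field}`) !== me;
}

recordEdit("todo-1", "text", me);
hasUndoConflict("todo-1", "text"); // false: I still own the text

recordEdit("todo-1", "text", "user-b");
hasUndoConflict("todo-1", "text"); // true: user-b took ownership
```

A function like this is exactly what the app would hand to the undo-manager as an entry's `hasUndoConflict` callback.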
If I later directly modify the completeness, I should be able to undo and redo that, and you shouldn't, because I took ownership over completeness. The gif below shows a simple example: When the user on the right edits the text of the second item, the user on the left loses the ability to undo any edits to its text or its creation, but still has the ability to undo the creation of the first item (which they still own). A sharp-eyed viewer would notice that the undo icon on the left animates when the conflicting entries are removed (when the user on the right enters the text "nope!"). ![undo_demo_conflict](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/43115ea9-1333-4842-9f38-b7b86989132b) If you don't agree with the specific logic I applied here, that's okay - the logic is totally flexible. The important thing is that we have an undo-manager that makes this possible. ### Getting the user experience right In the gif above, did you notice that little tooltip (just a "title" attribute in this demo) that tells the user what's going to happen when they click undo/redo as they hover over the button? Did you notice how the buttons animate when there is a change in the undo/redo stack? To achieve these, the undo-manager provides [a simple pub-sub service](https://github.com/isaacHagoel/todo-replicache-sveltekit/blob/undo-redo/src/lib/undo/simplePubSub.ts) so that the consumer can stay up to speed. <img width="259" alt="next undo operation tooltip" src="https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/6500dfb3-ad91-4d1e-b7d7-a9f69271b261"> ### Dealing with asynchrony The undo-manager has to be able to handle both synchronous and asynchronous operations because all the Replicache calls are async, and in other real-world systems, any call to the backend or external APIs would be async as well. 
The challenge with async operations is that they can complete in a different order than they start (depending on how long it takes each promise to resolve) and query the system while it's "between states." For example, an operation starts running, and then while it's awaiting something, an "undo" or a conflict check starts running and makes its own changes. To deal with that, I introduced [a simple module](https://github.com/isaacHagoel/todo-replicache-sveltekit/blob/undo-redo/src/lib/undo/serialAsyncExecutor.ts) that executes the async operations serially using a queue. ### Places where my demo implementation falls short If you read my [previous post](https://dev.to/isaachagoel/are-sync-engines-the-future-of-web-applications-1bbi), you’d know that I was thoroughly impressed by Replicache. That's still true, but I have hit some walls (missing features) when adding undo-redo to the app. It’s important to note that the undo-manager itself is generic (agnostic about Replicache) but does impose some expectations that Replicache fails to meet (all seem fixable). Here are the main challenges I faced: 1. **Distinguishing between self-inflicted and external updates**: When Replicache informs the app about incoming changes from the server, it doesn’t indicate whether these changes originate in the current session or in some other session (external). We’d like to initiate conflict checks when there are incoming external changes; for local changes, the checks already ran when the action was taken (before it was sent to the server). But how? I ended up adding some [app-specific logic](https://github.com/isaacHagoel/todo-replicache-sveltekit/blob/undo-redo/src/routes/list/%5BspaceID%5D/%2Bpage.svelte#L26) to detect that. Ideally, once Replicache exposes that info (relevant issue [here](https://github.com/rocicorp/replicache/issues/1058)), this kind of hack won’t be needed. 2. 
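A serial executor of this kind can be sketched with a single promise-chain "tail" (the linked module may differ in its details; this only illustrates the queueing idea):

```typescript
// Runs async operations strictly one after another: each new operation is
// chained onto the tail of one promise chain, so it starts only after the
// previous operation has settled (success OR failure).
class SerialAsyncExecutor {
  private tail: Promise<unknown> = Promise.resolve();

  execute<T>(op: () => Promise<T>): Promise<T> {
    const result = this.tail.then(op, op);
    // Swallow rejections on the internal chain so one failed operation
    // doesn't block every later operation.
    this.tail = result.catch(() => undefined);
    return result;
  }
}
```

Even if a later call's promise would resolve sooner on its own, its operation does not start until the earlier one finishes, so undo logic and conflict checks never observe the state mid-mutation.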
**Failure of an optimistic update**: Replicache uses optimistic updates, meaning that the server can reject operations or return a different outcome than expected. When that happens, the state is rolled back and adjusted to reflect the server state. When Replicache does that, it doesn’t notify the app; the rollback comes in like any other state update. That makes it hard to adjust the undo stack, which still contains the original updates that the server just rejected. I could have probably worked around it but opted to leave it unimplemented in this POC. Ideally, Replicache would expose that information in the future. 3. **Coming back from offline mode**: If you go offline, you can do anything you want (including undo/redo) and when you come back online your changes will be pushed to the server and override anything other users did while you were offline. This problem is not specific to undo/redo but a result of the [Last Write Wins](https://www.linkedin.com/pulse/last-write-wins-database-systems-yeshwanth-n-emc8c/) conflict-resolution strategy that my app uses. In theory, this could be mitigated by adding more sophisticated logic to the server’s mutators, but that would require more research to get right (maybe a good subject for another post). 4. **Lack of Integration with CRDT-based undo**: Supporting embedded text editors in a way that is user friendly remains a challenge (I haven't attempted it yet, another idea for a future post :)). ### Closing thoughts We've covered a lot of ground in this post. While I spent most of it discussing different aspects of the undo-manager, in reality the majority of my time and effort were spent on carefully thinking through and implementing the app's reverse-operations and conflict-checking logic. In some cases, I had to refactor the app and break down operations to make them undo/redo-friendly. 
For example, “completeAllItems” couldn’t remain a simple loop that calls “updateItem” with each item-id; it had to become its own thing with its own reverse logic (because maybe another user added or edited items). Some changes to the backend were required as well, such as adding an “un-delete” operation, which is different from “create” because it preserves the original sort position of the item. The database schema changed because I needed to add an "updatedBy" field on each todo, and these are just some examples. Testing is another task that grows considerably when your app has undo-redo. In other words, undo-redo is one of those features that make every other feature in your app more complicated and time-consuming to implement and maintain. Is it worth it? I think the answer is a resounding yes for productivity apps and content-editors of any kind, but you need to know what you’re getting yourself into. It is definitely not for the faint of heart. Thank you for reading. Feel free to leave a comment if you have any questions or insights.
isaachagoel
1,908,154
Tezos Investment Strategies: A Comprehensive Guide
Understanding Tezos Founded in 2014 by Arthur and Kathleen Breitman, Tezos is a...
27,673
2024-07-01T20:54:25
https://dev.to/rapidinnovation/tezos-investment-strategies-a-comprehensive-guide-4kfe
## Understanding Tezos Founded in 2014 by Arthur and Kathleen Breitman, Tezos is a decentralized blockchain platform designed to facilitate the development and use of decentralized applications (dApps) and smart contracts. Despite initial management issues, Tezos gained significant attention in 2017 with its unprecedented initial coin offering (ICO), which raised $232 million in investment. Tezos has shown resilience, overcome obstacles, and made significant progress in its development, solidifying its position as a pioneer in the blockchain ecosystem. ## Key Features and Technology Behind Tezos Tezos has numerous critical aspects that distinguish it from other blockchain platforms: **1\. On-chain Governance Model:** Tezos uses a unique governance system that gives token holders direct control over the platform's progress, allowing for protocol updates without hard forks. **2\. Liquid Proof-of-Stake (LPoS) Consensus:** This mechanism allows token holders to actively participate in network security and consensus by staking, with rewards for both delegators and validators. **3\. Support for Smart Contracts:** Tezos offers a stable framework for developing dApps, providing more security and flexibility for developers. **4\. Secure and Flexible Framework:** Tezos employs formal verification techniques to ensure the validity of smart contract code and allows for protocol changes without affecting the network. ## Tezos as a Smart Contract Platform Tezos provides developers with a strong platform capable of supporting a wide range of dApps and smart contracts. Its self-amending process allows the blockchain to evolve and adapt independently, making it an adaptable solution for various use cases, including DeFi, supply chain management, digital identity verification, and voting systems. ## Diverse Applications within the Tezos Ecosystem The Tezos ecosystem supports lending, borrowing, and trading in DeFi, as well as the generation and trading of NFTs. 
Projects like Dexter, Kolibri, and Kalamint showcase Tezos' adaptability, while partnerships with industry giants like Ubisoft and Societe Generale highlight its potential for real-world adoption. ## Investing in Tezos Tezos coins (XTZ) may appeal to investors seeking long-term growth. Its self-amending protocol and developing ecosystem create potential for continual improvement and acceptance. However, investors must understand and mitigate risks such as market volatility, legislative changes, and network improvements. ## Tezos Investment Strategies **1\. HODLing Tezos:** This strategy involves purchasing XTZ tokens and holding them for an extended period, capitalizing on potential value appreciation over time. **2\. Staking Tezos:** Investors can store their XTZ tokens in a wallet to help the network run, earning additional tokens as rewards. **3\. Trading Tezos:** Actively buying and selling XTZ tokens on exchanges to profit from short-term price volatility. **4\. Diversifying Tezos Investments:** Directing funds to a range of assets to reduce risk and increase portfolio resilience. ## Tezos Wallets and Security Choosing a secure Tezos wallet is critical. Hardware wallets like Ledger and software wallets like Galleon are popular choices. Implementing strong security measures, such as unique passwords and two-factor authentication (2FA), is essential to protect your digital assets. ## Future Outlook and Potential Developments Tezos continues to evolve with regular protocol upgrades. Investors must stay updated on advancements and regulatory considerations. Despite limitations, Tezos has enormous potential to alter the future of digital wealth creation. ## Conclusion Tezos offers a compelling opportunity for wealth generation in the dynamic blockchain environment. By understanding its key concepts, exploring investment strategies, and prioritizing security, investors can confidently handle Tezos investments and position themselves for financial success. 
Tezos continues to innovate and evolve, establishing itself as a formidable participant in the digital asset environment.

📣📣Drive innovation with intelligent AI and secure blockchain technology! Check out how we can help your business grow!

[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)

[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)

## URLs

* <http://www.rapidinnovation.io/post/a-path-to-digital-wealth-creation-with-tezos-investment-strategies>

## Hashtags

#BlockchainTechnology #TezosInvestment #DigitalAssets #SmartContracts #CryptoInvesting
rapidinnovation
1,908,150
The Definitive Guide to C# .NET Datagrids
This definitive guide to C# .NET datagrids has everything you need to know. Learn about the benefits and how to make this control in your desktop applications.
0
2024-07-01T20:40:37
https://medium.com/mesciusinc/the-definitive-guide-to-c-net-datagrids-1905c9180ca8
webdev, devops, csharp, dotnet
---
canonical_url: https://medium.com/mesciusinc/the-definitive-guide-to-c-net-datagrids-1905c9180ca8
description: This definitive guide to C# .NET datagrids has everything you need to know. Learn about the benefits and how to make this control in your desktop applications.
---

A .NET datagrid is a user interface (UI) control for displaying bound data in a tabular format. It is a powerful control that provides many productivity and data analysis features for .NET applications. A datagrid is similar to an HTML table but has added features like column sorting, column resizing, and built-in cell editing.

.NET datagrid applications are typically written in C# but may also support VB.NET. In this article, we share the evolution of the datagrid for developing .NET applications from the Windows desktop to the web, the top features you’ll find in a datagrid, common development scenarios for C# datagrids, and more.

## The Evolution of C# .NET DataGrids

From its humble origins, the .NET datagrid has transformed into a versatile yet complex software component because of technological advancements and evolving business requirements. Because of its complexity, you must peel back the layers to better understand the control.

**WinForms Datagrids**

The first .NET DataGrid control, released by Microsoft with .NET 1.0 for Windows Forms and bundled with Visual Studio, took tabular data and displayed it onscreen in the form of rows and columns. It had basic designer configuration support and included paging, sorting, and updating support, all of which required writing code.

With .NET 2.0, Microsoft wrote a new datagrid control named _[DataGridView](https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.datagridview?view=windowsdesktop-8.0)_. This control added enhanced design-time capabilities, new data-binding features, and out-of-the-box sorting and paging features.
Microsoft defined more run-time events, also known as “callbacks,” to extend the behavior and adjust the look and feel of the grid at runtime.

![Windows Forms DataGridView Control](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wfx1qgbxrewtnktvlqzq.png)

**ASP.NET Datagrids**

A second version of the DataGrid control, now named _GridView_, was also developed for web applications written with ASP.NET. Web applications originally didn’t have the full power of native desktop applications, so inline editing was not easily feasible. Editing was limited to one row at a time, typically with an edit button column that would transform each row into several textboxes and submit all edits as a batch response to the server. These runtime features on the client were continuously improved over time with the help of JavaScript and jQuery.

![ASP.NET AJAX C1GridView](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vzsibvs0po591xszkeno.png)

**WPF Datagrids**

With .NET 3.0, Microsoft included a new _DataGrid_ control for its new presentation layer, WPF. One might speculate that WPF (Windows Presentation Foundation) was intended to replace Windows Forms; however, both are built upon the same Win32 libraries and are still widely used today. The WPF DataGrid offered a slightly different feature set and limits than the Windows Forms version. You’ll find a comparison later in this article.

![WPF DataGrid Control](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6f3j7bnmlr5g0eayfe5r.png)

**.NET Datagrids for Newer Frameworks**

The DataGridView in WinForms and DataGrid in WPF are still supported and used today with .NET 8.0 (and soon .NET 9.0) applications. There is a DataGrid provided for UWP (WinUI 2), but it’s not currently available for WinUI 3 or .NET MAUI.

For ASP.NET technologies, we have shifted to ASP.NET Core, Razor Pages, Blazor, and MVC. Surprisingly, these newer web frameworks do not have any native .NET datagrid control.
The reason is that these libraries heavily depend on JavaScript more than server-side C# code, so the .NET team has left it up to developers to acquire their own UI libraries. Since these newer web frameworks lack native controls aside from pure HTML and JavaScript, you will want to use powerful third-party datagrid controls that replicate the same components in the earlier frameworks.

**Third-Party .NET Datagrids Take Off**

From the start, developers ran into limitations with the standard .NET controls. They received requests from their users to add more features made popular by other applications like Microsoft Office, or, as explained, some newer .NET frameworks like Blazor lacked powerful UI controls. Most developers didn’t want to write colossal amounts of code to implement these advanced datagrid features, but where there’s a problem that needs solving, there are companies willing to sell you their solutions. These problems gave rise to the third-party component ecosystem, which Microsoft heartily supported because it ultimately added value to their development environment, Visual Studio.
This plight resulted in .NET component vendors adding the features that developers found lacking in the default Microsoft controls, including:

- Multi-level grouping for hierarchical display
- Multi-column sorting to make fundamental analysis easier
- Auto-sized columns and rows for responsive applications
- Rich design-time support for configuring appearance and behavior without writing any code
- Advanced cell customization for [editing and visualizing](https://developer.mescius.com/componentone/docs/win/online-flexgrid/edit-mode.html) unique data
- Unbound columns for more accessible, dynamic data display
- Merge/split cells and rows, similar to Microsoft Word and Excel
- Flexible styling options to apply custom branding to the user interface
- Freezing and pinning columns, similar to Microsoft Excel
- Custom printing and export to popular formats like Excel, CSV, HTML, and PDF

And, of course, even basic datagrid controls for newer web frameworks in addition to WinUI and .NET MAUI. For example, the [ComponentOne FlexGrid](https://developer.mescius.com/componentone/flexgrid-net-data-grid-control) is a cross-platform .NET datagrid supported in ASP.NET MVC, Blazor, [Windows Forms](https://developer.mescius.com/componentone/docs/win/online-flexgrid/overview.html), WPF, UWP, WinUI, Xamarin, .NET MAUI, and even pure JavaScript. FlexGrid is an ideal choice for developers and enterprise companies. It supports the same production-ready features as the .NET datagrid but also elevates your application to the next level with custom cells, on-demand loading, built-in filtering, and file export. ComponentOne products are provided by Mescius.

![ComponentOne FlexGrid Datagrid Control](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6d75k27b4xe5ud09fsyf.png)

Those are just some of the feature highlights. Next, we’ll look at the full feature set for a typical C# .NET datagrid and how the features compare across frameworks and third-party datagrids.
## The Top Features of a C# .NET Datagrid

The primary uses of .NET datagrids include features for displaying, editing, and analyzing data. Let’s break down some key features in these areas across each .NET platform.

**.NET Datagrid Display Features**

Display features help the user read and understand the data more quickly and efficiently. You can improve the readability of the raw data with cell formatting, merging, and column bands. The table below shows how the top (built-in) display features stack up against a third-party datagrid, like FlexGrid.

![.NET Datagrid Display Features](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vjbq20cpiglln6psvnux.png)

**.NET Datagrid Editing Features**

Datagrids are primarily designed for editing, as you can get a quick, barebones table editor by simply databinding. The standard .NET datagrids provide basic features and extensibility but do not have as many built-in features as third-party datagrids.

![.NET Datagrid Editing Features](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r376ftqxs8oi864lz7vo.png)

**WinForms Datagrid Analysis Features**

The standard .NET datagrids are designed primarily for displaying and editing. If you’re looking for more advanced features that fall into the analysis category, you’ll find more of these features built into third-party datagrids, as you can see below. A built-in feature typically means that it’s enabled with very little code — often by setting just one property.

![WinForms Datagrid Analysis Features](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ez6nrbbmyxey4k4pdu5.png)

A missing check means it’s not a built-in runtime feature, but it may still be possible by writing some code. An exception is that, by design, many WPF DataGrid features require custom converters or styles or rely on configuring the CollectionViewSource. While these may not be “built-in” the same way as WinForms, they are fairly easy to use, considering the nature of the platform.
Some ASP.NET features considered “built-in” require minimal JavaScript, such as a format function. The trade-offs to getting more built-in features with a third-party datagrid are extra steps to acquire the libraries and additional costs associated with licensing the control.

Next, let’s look at how to build and use a datagrid control.

## How to Build a C# .NET Datagrid

Let’s look at three common approaches to working with a .NET datagrid, from quick and simple to advanced customization.

**Fast Databinding Scenario (Easy)**

In the simplest scenario, we simply data-bind and build! Most datagrid controls automatically generate columns for each field and come with basic editing and sorting features out of the box. In many simple cases, this is all you need, but notice that every column will display in the exact order in which it is discovered in the database.

![Bound DataGrid with Auto-Generated Columns](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wsgdxaeuxfftyyffatva.png)

The exact steps can vary for each .NET framework. The basic steps are:

1. Instantiate the datagrid control
2. Set the datagrid’s datasource to your data collection
3. Build your application

As you can probably tell, the most complex part of this scenario is just obtaining your data collection. The data collection could come from a web service, SQL Server, JSON file, or anywhere. UI controls are generally data agnostic and do not care where the data comes from, but you will need to convert the data to some collection type that the datagrid control recognizes. In .NET, there is a large array (pun intended) of data types that can be used with datagrids, including List, CollectionView, and ObservableCollection.
A short C# code example for Windows Forms is below:

```
// obtain data set
List<Customer> customers = new List<Customer>();

// populate data set (omitted)

dataGridView1.DataSource = customers;
```

For coded examples using Windows Forms, check out our previous article, [The Definitive Guide to WinForms Datagrids](https://dev.to/mescius/the-definitive-guide-to-winforms-datagrids-2j31).

**Column Configuration Scenario (Moderate)**

By default, datagrids will automatically generate all the columns based on your data source. That sounds convenient, but in reality, databases store strange fields like “last modification date” and odd look-up IDs that you don’t always want or need to display to the user. So, in most cases, you will need to perform some slight modifications to the columns, such as reordering and formatting.

![Blazor FlexGrid with Formatted Columns](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ardcbxontdwdm6t2sshu.png)

Columns are rarely in the “perfect” order from the data source, so you will likely need to customize them completely to achieve your desired order. Most datagrid controls also have a quick and easy way to format column text for text and date values once you have defined the columns in code or markup (for example, “c” is the format string for currency).

The general steps for this scenario are:

1. Instantiate the datagrid control
2. Disable the datagrid’s automatic column generation
3. Define each column you want to be displayed in your desired order
4. Apply simple column formatting for dates & numbers
5. Set the datagrid’s datasource to your data collection
6. Build your application

In XAML and HTML frameworks, you will typically define the columns in the markup directly within your datagrid tags. For Windows Forms, you will define them either in the Visual Studio designer or in C# code.
In all .NET frameworks, you can also create and rearrange columns in C# code, so step #3 has some variance based on each framework and your preferences. Below is a short C# code example for Windows Forms:

```
// disable automatic column generation
dataGridView1.AutoGenerateColumns = false;

// define each column in order, with optional format
DataGridViewTextBoxColumn col1 = new DataGridViewTextBoxColumn();
col1.DataPropertyName = "Name";
col1.HeaderText = "Customer name";
col1.DefaultCellStyle.Format = "c";

// add column to datagrid
dataGridView1.Columns.Add(col1);
```

After you’ve customized your displayed columns, you still receive all the same editing and sorting features out of the box.

**Complete Datagrid Customization (Advanced)**

If you need more than the built-in editing, sorting, and moderately easy column customization, then you are likely looking at an [advanced datagrid customization](https://developer.mescius.com/componentone/flexgrid-net-data-grid-control). These advanced features include:

- Grouping by columns
- Editing with custom cell editors
- Displaying custom objects in cells
- Creating hierarchical grids with collapsible rows and drill-down details
- Filtering by drop-down menus or a filter row
- Freezing columns and rows in place during scroll

Some of these features are possible with the standard datagrids but require writing a lot of code. If you use a third-party datagrid, you can enable most of these features rather quickly with just a few lines of code.

![Advanced GanttView Created from FlexGrid for WinForms](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6e9q5pm2biyzew5oh6h0.png)

**What about import and export?**

We’ve reviewed displaying and editing, but you may be wondering about import and export. Regardless of what datagrid or .NET framework you use, these features will require additional C# or VB.NET code.
Datagrid controls typically have a built-in feature for adding new rows, which can be enabled by setting one property; however, if you need to import/export from a file, such as a Microsoft Excel file, you will need an additional library to help. Third-party datagrids, [such as FlexGrid](https://developer.mescius.com/componentone/flexgrid-net-data-grid-control), typically help you in this area with auxiliary libraries that include the Excel (or Word, CSV, PDF, etc.) formatting libraries along with an extended “Export” method.

## .NET Datagrids with Spreadsheet-like Features

Microsoft Excel is the most well-known and used spreadsheet application in the world. Even its competitors offer similar features. If you’re looking for even more Excel-like features for your .NET datagrid, you may want to consider a specialized spreadsheet component.

When designing reusable component libraries, there is a limit to the number of features you should add because nobody wants application bloat (where only a small percentage of the features are used). The solution is to break out specialized features into separate libraries. For .NET datagrids, this includes specialized grids that look and feel like Microsoft Excel, as well as Microsoft Project (GanttViews) and Power BI (Pivot Tables).
The Excel-like spreadsheet features, which include support for exporting and printing (much like other reporting tools), also include:

- A spreadsheet-style look and feel to leverage the popularity of Microsoft Excel
- Advanced row filtering to narrow down results, also inspired by Excel
- Embedded input controls within cells like text and images
- Multi-line rows to display composite information, such as an address, within a single cell
- Dynamic cell drawing for complete control of the grid’s appearance
- Virtual data scrolling to display large amounts of data while providing a seamless user experience
- Configurable views, the next version of styling options
- Support for a range of export formats, including images, Excel, Word, and PDF

![PivotTables](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipyi73c711i4pwut8uuo.png)

[MESCIUS Spread.NET](https://developer.mescius.com/spreadnet) is the number one best-selling spreadsheet component for .NET.

## Conclusion

Over time, we’ve moved from basic .NET datagrids provided by the framework to requiring more advanced features and solutions across a much wider span of frameworks. Open-source and third-party component vendors have filled in the gaps left open by the core .NET framework. If you consider the costs of developing the features yourself versus buying off-the-shelf, you’ll often find that third-party libraries pay for themselves in the saved time and resources.

If you’re developing Windows desktop applications, you may be able to get by with simple datagrids using the native .NET library. Still, if you need advanced features or themes, or if you’re developing for the web, you will definitely want to add a control suite to your toolbox since the only controls available are basic HTML elements.
chelseadevereaux
1,908,148
Mental Challenges
As a person in tech for over 2 decades, I have always seen how well I had done with development but...
0
2024-07-01T20:36:14
https://dev.to/mutantmalu/mental-challenges-3ji5
creative, challenge, hard75, beforeandafter
As a person in tech for over two decades, I have always seen how well I've done with development but never taken it to the next level. The same can be said for physical activity and sports. The thought is that I get pretty close to good, then get bored, and I need a challenge. With physical activity, I have taken it upon myself to do 30/60-day challenges, and I am currently doing a 75-day challenge.

The challenge is as follows, and I have made it hybrid:

1. A gallon of water each day for 75 days (pure water, nothing mixed)
2. 300 grams of protein under 2,000 calories
3. Write a Python script every day; it needs to encompass a new API type, a new ML function, and a complex calculation based on the information written
4. Work out twice a day (45 minutes inside, 45 minutes outside)
5. Read 15 pages of a non-fiction book

This is the start of day 1.
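As a day-one warm-up for the daily-Python-script rule, here is a tiny sketch of the "complex calculation" piece, sanity-checking the protein rule. The 4 kcal per gram of protein is the standard nutrition approximation; everything else comes straight from the challenge rules:

```python
PROTEIN_KCAL_PER_GRAM = 4  # standard nutrition approximation

def protein_budget(protein_grams: int, calorie_cap: int) -> int:
    """Return calories left for everything else after hitting the protein goal."""
    protein_kcal = protein_grams * PROTEIN_KCAL_PER_GRAM
    return calorie_cap - protein_kcal

# Rule #2: 300 g of protein under 2,000 calories
remaining = protein_budget(300, 2000)
print(f"Protein alone uses {300 * PROTEIN_KCAL_PER_GRAM} kcal, leaving {remaining} kcal")
```

So the protein goal eats 1,200 of the 2,000 calories before anything else, which is exactly the kind of math worth scripting daily.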
mutantmalu
1,908,144
A technical article about front-end technologies
Introduction Frontend technologies are a set of technologies used to develop the user interface of...
0
2024-07-01T20:30:47
https://dev.to/blessing_edward_968172f10/a-technical-article-about-front-end-technologies-e5m
Introduction Frontend technologies are a set of technologies used to develop the user interface of web pages and applications. Developers can create anything from design, structure to animation we see when opening a website, web applications or a mobile app. The primary goal of front-end development technologies is to improve efficiency during the web development process. A list of front-end technologies which has surfaced the rise of libraries and framework includes. Vue JS: which was created by Evan You. Vue JS is widely used to develop interactive user interface and SPAs. It is one of the best suited JavaScript for creating a lightweight and adaptive interactive UI elements.It is also very easy to implement because of the model view-view model(MVVM) architectural pattern. Flutter: This is one of the most rapidly growing front-end framework for developing effective and flexible web design.It assist a developing flexible and native-like apps with a singular code base. With flutter,developers can quickly create cross-platform apps using the dart programming language. React.JS: This is a popular open source JavaScript front-end library that enables the creation of dynamic and interactive applications while also improving the UI/UX designs. React.JS has a declarative UI, which makes react code easily read and fixed. Conclusion Knowing fully well that HNG is a React only based front-end internship, I will really love to forge ahead with this opportunity given to me to learn React as a front-end technique for web development. https://hng.tech/internship https://hng.tech/hire
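The "declarative UI" point deserves a tiny illustration. React itself is a JavaScript library, but the core idea is language-agnostic: the view is described as a plain function of state and recomputed whenever state changes. The sketch below illustrates that idea only; it is not React's API:

```python
# Declarative-UI idea in miniature: the view is a pure function of state.
# (Illustration only -- React is JavaScript; this is not React's API.)
def view(state: dict) -> str:
    """Describe the UI for a given state; never mutate the UI directly."""
    return f"<button>Count: {state['count']}</button>"

state = {"count": 0}
print(view(state))             # <button>Count: 0</button>

state = {**state, "count": 1}  # a state change produces a new description
print(view(state))             # <button>Count: 1</button>
```

Reading the UI as `view(state)` rather than a sequence of manual DOM edits is what makes declarative code easier to read and fix.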
blessing_edward_968172f10